<?xml version="1.0"?>
<!DOCTYPE document SYSTEM "./dtd/document-v10.dtd">
<!-- ========================================================================= -->
<!-- Copyright (C) The Apache Software Foundation. All rights reserved. -->
<!-- -->
<!-- This software is published under the terms of the Apache Software License -->
<!-- version 1.1, a copy of which has been included with this distribution in -->
<!-- the LICENSE file. -->
<!-- ========================================================================= -->
<!-- ========================================================================= -->
<!-- author vincent.hardy@eng.sun.com -->
<!-- version $Id$ -->
<!-- ========================================================================= -->
<document>
<header>
<title>Batik - Test Infrastructure</title>
<subtitle>How testing is done in Batik</subtitle>
<authors>
<person name="Vincent Hardy" email="vincent.hardy@eng.sun.com"/>
</authors>
</header>
<body>
<s1 title="Introduction">
<p>This document describes the Batik test infrastructure whose goals
are to:</p>
<ul>
<li>Make it easy to detect regressions</li>
<li>Make it easy to run test suites</li>
<li>Make it easy to write new tests and add them
to an existing test suite</li>
</ul>
<p>The intent is for the test infrastructure to grow
along with Batik and keep monitoring the health of
the code base.</p>
<p>While the test suites in the infrastructure will be
run every day by build/test machines, they are also
intended to help committers and developers gain confidence
that their code modifications did not introduce regressions.</p>
<p>This document describes:</p>
<ul>
<li><link href="#infrastructure">Test infrastructure</link>This may be a little
dry, but it helps understanding the format of the files used to run
the test suites</li>
<li><link href="#managingATestSuite">How to describe and run a test suite</link></li>
<li><link href="#regardReplacement">The 'regard' replacement</link></li>
<li><link href="#regsvggenReplacement">The 'regsvggen' replacement</link></li>
<li><link href="#writingNewTests">Writing new Tests</link></li>
</ul>
</s1>
<anchor id="infrastructure" />
<s1 title="The Test infrastructure">
<s2 title="High-Level Interfaces">
<p>The following are the high-level
interfaces in the infrastructure (a minimal code sketch follows the list):</p>
<!-- <figure src="images/testHighLevel.jpg" alt="Batik Viewer"/> -->
<ul>
<li>A <code>Test</code> performs whatever check
is needed in its <code>run</code> method, and each
run produces a <code>TestReport</code></li>
<li>A <code>TestReport</code> describes whether
a <code>Test</code> run passed or failed and provides a
description of the failure in terms of an error
code (unique in the context of a given <code>Test</code>)
and a set of key/value pairs</li>
<li>A <code>TestSuite</code> is a <code>Test</code>
aggregation which can run a set of <code>Test</code> instances </li>
<li>A <code>TestReportProcessor</code> is used to
analyze a <code>TestReport</code>. A specific implementation
can choose to create graphs, send an email or write
an HTML file</li>
</ul>
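<p>To make these relationships concrete, here is a minimal sketch, in Java,
of what these interfaces could look like. The method names are derived from
the descriptions above and are illustrative only; refer to the
org.apache.batik.test sources for the authoritative definitions.</p>
<source>
// Illustrative sketch only: the real interfaces live in the
// org.apache.batik.test package and may differ in detail.
public interface Test {
    /** Performs the check and describes the outcome. */
    TestReport run();
}

public interface TestReport {
    /** True if the Test run passed. */
    boolean hasPassed();

    /** Error code, unique in the context of a given Test. */
    String getErrorCode();

    /** The Test that produced this report. */
    Test getTest();

    // ... plus accessors for the key/value pairs that
    // further describe a failure.
}

public interface TestSuite extends Test {
    /** A TestSuite is itself a Test that aggregates children. */
    void addTest(Test test);
}

public interface TestReportProcessor {
    /** Analyzes a report, e.g., builds HTML or sends an email. */
    void processReport(TestReport report) throws Exception;
}
</source>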
</s2>
<s2 title="Default Implementations">
<p>The test infrastructure comes with a number of default
implementations for the interfaces described above (a short
usage sketch follows the list). Specifically:</p>
<ul>
<li><code>AbstractTest</code>. This implementation of
the <code>Test</code> interface is intended to make it
easier to write a 'safe' <code>Test</code> implementation.
See <link href="#writingNewTests">"Writing New Tests"</link>
for a description of how to use that class.</li>
<li><code>DefaultTestReport</code> provides a simple
implementation of the <code>TestReport</code> interface
that most <code>Test</code> implementations will be able to
use. See <link href="#writingNewTests">"Writing New Tests"</link> for more details.</li>
<li><code>DefaultTestSuite</code> provides an implementation
of the <code>TestSuite</code> interface and makes it
easy to aggregate <code>Test</code> instances.</li>
<li><code>SimpleTestReportProcessor</code> is a sample
<code>TestReportProcessor</code> implementation that
simply traces the content of a <code>TestReport</code> to
an output stream</li>
<li><code>TestReportMailer</code> is another implementation
of the <code>TestReportProcessor</code> interface that
emails a test report to a list of destination addresses.</li>
</ul>
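<p>As an illustration, the default implementations could be combined
programmatically as in the hedged sketch below. The <code>Test</code>
implementations are hypothetical, and the constructors and method names are
assumptions based on the descriptions above; the XML configuration described
later in this document is the usual way to wire tests together.</p>
<source>
// Illustrative only: aggregate two hypothetical Test instances
// and trace the resulting report to an output stream.
TestSuite suite = new DefaultTestSuite();
suite.addTest(new MyFavoriteTestA()); // hypothetical Test
suite.addTest(new MyFavoriteTestB()); // hypothetical Test

// Running the suite runs each child Test and aggregates
// the results into a single TestReport.
TestReport report = suite.run();

// Trace the report content to the console.
TestReportProcessor processor = new SimpleTestReportProcessor();
processor.processReport(report);
</source>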
</s2>
<s2 title="XML Implementations">
<p>The test infrastructure uses XML-out (and XML-in
too, see <link href="#runningATestSuite">"Running a test suite"</link>) as the preferred way to
generate test reports. The <code>XMLTestReportProcessor</code>
implementation of the <code>TestReportProcessor</code> interface
outputs reports in XML in a configurable directory.</p>
<p>The <code>XMLTestReportProcessor</code> can notify an
<code>XMLReportConsumer</code> when it has created a new
report. One implementation of that interface is provided
by default, <code>XSLXMLReportConsumer</code>, which can run an XSL
stylesheet on the XML report (e.g., to generate an HTML report).
This is used
by the 'regard' rule in the Batik build to produce an HTML report for
the default regression test suite.</p>
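<p>The notification hook between the processor and its consumer can be
pictured as in the following sketch. The callback name and signature are
assumptions made for illustration; see the org.apache.batik.test.xml package
for the actual interface.</p>
<source>
// Illustrative sketch: the processor calls back its consumer
// each time it has written a new XML report file.
public interface XMLReportConsumer {
    /** Invoked when a new XML report has been produced. */
    void onNewReport(java.io.File xmlReportFile) throws Exception;
}
</source>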
</s2>
</s1>
<anchor id="managingATestSuite" />
<s1 title="Managing Test Suites">
<p>The infrastructure tries to make it easy to create, update and
modify test suites. This section explains how to describe a set
of tests to be run and how to actually run that test suite.</p>
<s2 title="Describing a Test Suite">
<p>Test suites can be described in XML (the XML-in referred to earlier
in this document). The general format for describing a test suite is:</p>
<source>
&lt;testSuite name="MyFavoriteTestSuite"&gt;
&lt;!-- =================================== --&gt;
&lt;!-- Set of tests to be run --&gt;
&lt;!-- =================================== --&gt;
&lt;test class="myFavoriteTestClassA" /&gt;
&lt;test class="myFavoriteTestClassB" /&gt;
&lt;test class="myFavoriteTestClassC" /&gt;
&lt;/testSuite&gt;
</source>
<p>This simply lists the <code>Test</code> instances that compose
a given test suite. For example: </p>
<source>
&lt;testSuite name="SAMPLE TEST SUITE"&gt;
&lt;!-- ========================================================================== --&gt;
&lt;!-- Validates that the SVGRenderingAccuracyTest class is operating as expected --&gt;
&lt;!-- ========================================================================== --&gt;
&lt;test class="org.apache.batik.test.svg.SVGRenderingAccuracyTestValidator" /&gt;
&lt;!-- ========================================================================== --&gt;
&lt;!-- Rendering regression tests --&gt;
&lt;!-- ========================================================================== --&gt;
&lt;test class="org.apache.batik.test.svg.SVGRenderingAccuracyTest"&gt;
&lt;arg class="java.net.URL"
value="file:samples/anne.svg" /&gt;
&lt;arg class="java.net.URL"
value="file:test-references/samples/solaris/anne.png" /&gt;
&lt;property name="VariationURL"
class="java.net.URL"
value="file:test-references/samples/variation/anne.png" /&gt;
&lt;property name="SaveVariation"
class="java.io.File"
value="test-references/samples/variation-candidate/anne.png" /&gt;
&lt;/test&gt;
&lt;/testSuite&gt;
</source>
</s2>
<anchor id="runningATestSuite" />
<s2 title="Running a Test Suite">
<p>
Yet another XML file describes which tests to run and how to process the
generated test reports. The general syntax is as follows:</p>
<source>
&lt;testRun name="Test Run Name Here"&gt;
&lt;testRun name="REGARD"&gt;
&lt;!-- =================================== --&gt;
&lt;!-- Descriptions of processors that --&gt;
&lt;!-- will process the results of the --&gt;
&lt;!-- test suite --&gt;
&lt;!-- =================================== --&gt;
&lt;testReportProcessor class="myFavoriteReportProcessorA" /&gt;
&lt;testReportProcessor class="myFavoriteReportProcessorB" /&gt;
&lt;!-- =================================== --&gt;
&lt;!-- Set of test suites to run. They will --&gt;
&lt;!-- produce TestReports. --&gt;
&lt;!-- =================================== --&gt;
&lt;testSuite href="http://url.to.my.first.test.suite"/&gt;
&lt;testSuite href="http://url.to.my.second.test.suite" /&gt;
&lt;/testRun&gt;
&lt;/testRun&gt;
</source>
<p><code>&lt;testRun&gt;</code> elements can be nested. In a nutshell, you specify a set of <code>TestReportProcessor</code> instances which
should process the <code>TestReport</code> instances generated by the <code>TestSuite</code> built
from the list of <code>Test</code> instances described in each referenced <code>&lt;testSuite&gt;</code> file.
For example:</p>
<source>
&lt;testRun name="Batik Standard Regression Test Run"&gt;
&lt;testRun name="REGARD"&gt;
&lt;testReportProcessor class="org.apache.batik.test.xml.XMLTestReportProcessor" &gt;
&lt;arg class="org.apache.batik.test.xml.XSLXMLReportConsumer"&gt;
&lt;!-- Stylesheet --&gt;
&lt;arg class="java.lang.String" value="file:test-resources/org/apache/batik/test/svg/HTMLReport.xsl" /&gt;
&lt;!-- Output Directory --&gt;
&lt;arg class="java.lang.String" value="test-reports/html" /&gt;
&lt;!-- Output file prefix --&gt;
&lt;arg class="java.lang.String" value="RegardResult" /&gt;
&lt;!-- Output file suffix --&gt;
&lt;arg class="java.lang.String" value=".html" /&gt;
&lt;/arg&gt;
&lt;/testReportProcessor&gt;
&lt;testSuite href="file:test-resources/org/apache/batik/test/samplesRendering.xml" /&gt;
&lt;testSuite href="file:test-resources/org/apache/batik/svggen/regsvggen.xml" /&gt;
&lt;testSuite href="file:test-resources/org/apache/batik/test/unitTesting.xml" /&gt;
&lt;/testRun&gt;
&lt;/testRun&gt;
</source>
<p>There is now a rule in our <code>build.xml</code> to run a test suite defined in
an XML file such as the one above. At the command line, type the following:</p>
<p><code>build runtestsuite path/to/my/newly/created/testSuite.xml</code></p>
<p>In addition, the <em>regard</em> rule runs a specific set of tests by default,
so you do not need to pass any testRun file argument.</p>
</s2>
</s1>
<anchor id="regardReplacement" />
<s1 title="The 'regard' replacement">
<p>'regard' is the tool we have used so far to detect regressions in how we render SVG images.</p>
<p>There is a <code>Test</code> implementation, <code>SVGRenderingAccuracyTest</code>, which takes over
regard's responsibilities and then goes beyond them.</p>
<s2 title="The SVGRenderingAccuracyTest configuration">
<p>An <code>SVGRenderingAccuracyTest</code>'s constructor configuration consists of:</p>
<ul>
<li>The URL to the SVG it should render</li>
<li>The URL to a reference PNG file</li>
</ul>
<p>The default behavior for the test is to render the SVG into a PNG file and compare it with
the reference image. If there is no difference, the test passes. Otherwise, it fails.</p>
<p>In addition to this default behavior, the <code>SVGRenderingAccuracyTest</code> can
take an optional configuration parameter, an image URL defined as an 'accepted' variation
around the reference image. If such a variation image is specified, then the test will pass if:</p>
<ul>
<li>The rasterized SVG is equal to the reference image</li>
<li>Or, the difference between the rasterized SVG and the reference image is
exactly the same as the accepted variation image</li>
</ul>
<p>Finally, to ease the process of creating 'accepted' variation images, <code>
SVGRenderingAccuracyTest</code> can take an optional file name (called 'saveVariation')
describing where the variation between
the rasterized SVG and the reference image will be stored in case the rasterized SVG
is different from the reference image and the difference is not equal to the variation
image, if any was defined. That way, it becomes possible to run a test. If that test fails,
the developer can review the saveVariation image, decide whether or not it is an acceptable
variation, and use it in subsequent test runs as the 'accepted' variation image, which
will allow the test to pass as long as that exact same variation remains constant.</p>
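<p>In other words, the pass/fail decision just described boils down to the
following sketch. The helper names are illustrative, not the actual
<code>SVGRenderingAccuracyTest</code> internals.</p>
<source>
// Illustrative decision logic, not the actual implementation.
byte[] rendering = renderToPNG(svgURL);   // rasterize the SVG
byte[] reference = load(refImageURL);     // reference PNG

if (equal(rendering, reference)) {
    return reportSuccess();               // exact match: pass
}

byte[] diff = difference(rendering, reference);
if (variationURL != null) {
    if (equal(diff, load(variationURL))) {
        return reportSuccess();           // accepted variation: pass
    }
}

if (saveVariation != null) {
    save(diff, saveVariation);            // candidate variation for review
}
return reportFailure();                   // unexplained difference: fail
</source>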
</s2>
<s2 title="Day to day use of regard replacement">
<p>Concretely, regard is replaced by a specific test suite description, regard.xml. That
file contains a set of test files (the ones in the samples and samples/tests directories)
for which rendering regressions should be checked.</p>
<p><em>Initial set up</em></p>
<p>
To set up the test environment the first time, you need to:</p>
<ul>
<li>Check out the latest version of the code, including the test-xx directories
(sources, resources and references) and the build.xml file</li>
<li>Run the regard test suite once: <code>build regard</code></li>
</ul>
<p>This will generate an HTML test report in the <code>test-reports/html</code> directory.
Depending on how different text rendering is between your work environment and the
environment used to create the reference images, you will get more or fewer tests that fail,
because of differences in the way text is rendered on various platforms and because of
fonts not being available on some platforms. For example, running the tests on a Windows 2000
laptop against images generated on the Solaris platform caused 16 tests out of 71 to fail.</p>
<p>Review the HTML report to make sure that the differences are really due to text variations.
This will usually be the case, and you can make sure by clicking on the diff images contained
in the report to see them at full scale. You can then turn the 'candidate' variations generated by
the tests into 'accepted' variations by moving files from one directory to another:</p>
<p><code>mv test-references/samples/candidate-variations/*.png test-references/samples/accepted-variations/</code></p>
<p><code>mv test-references/samples/tests/candidate-variations/*.png test-references/samples/tests/accepted-variations/</code></p>
<p />
<p>You can now run the test again:</p>
<p><code>build regard</code></p>
<p>Check the newly generated HTML report in the test-reports/html directory: there should no
longer be any test failures.</p>
<p />
<p><em>Daily usage</em></p>
<p />
<p>Once the initial set-up has been done, you can use regard by simply updating your
CVS copy, including the test-references. If no change occurs, your tests will keep passing
with your reference images. If a test fails (e.g., if someone checks in a new reference
image from a platform different than the one you are using), you will have to check whether it is
because of system-specific reasons or whether there is a bigger problem.</p>
</s2>
</s1>
<anchor id="regsvggenReplacement" />
<s1 title="The 'regsvggen' replacement">
<p>The regsvggen tool has been deprecated and replaced by a test suite (which you can
find in <code>test-resources/org/apache/batik/svggen/regsvggen.xml</code> in the
source distribution).</p>
</s1>
<anchor id="writingNewTests" />
<s1 title="Writing new Tests">
<p>Writing a new Test involves either configuring a new test or writing a new <code>Test</code> class. In
both cases, you will need to add an entry to a test suite's XML description. This section uses two
test suites as an example: the 'regard' test suite to show how to configure a new test and the
'unitTests' test suite to show how to add a new <code>Test</code> implementation.</p>
<s2 title="Adding a new test configuration">
<p>Imagine that you add a cool new test case to the samples directory, such as <em>linkingViewBox.svg</em>.
In order to check for regressions on that file you can add the following entry:</p>
<source>
&lt;test class="org.apache.batik.test.svg.SVGRenderingAccuracyTest"&gt;
&lt;arg class="java.net.URL"
value="file:samples/tests/linkingViewBox.svg" /&gt;
&lt;arg class="java.net.URL"
value="file:test-references/samples/tests/solaris/linkingViewBox.png" /&gt;
&lt;property name="VariationURL"
class="java.net.URL"
value="file:test-references/samples/tests/variation/linkingViewBox.png" /&gt;
&lt;property name="SaveVariation"
class="java.io.File"
value="test-references/samples/tests/variation-candidate/linkingViewBox.png" /&gt;
&lt;/test&gt;
</source>
<p>to the test-resources/org/apache/batik/test/samplesRendering.xml test suite description, the
description of the regard test suite. If you have access to the build machine where the
reference images are typically generated, you can check in the reference image in
test-references/samples/tests. Otherwise (and this is OK), you can let the test fail the
first time it is run on the build/test machine, and that will be a reminder for whoever
is responsible for that machine that a valid reference image should be checked in.</p>
</s2>
<s2 title="Writing a new test">
<p>Imagine you want to validate some aspect of your code; let's take the bridge error
handling as an example. You could create a new class in the <code>test-sources</code>
area, in the org/apache/batik/bridge directory in our example, and call it <code>ErrorHandlingTest</code>.
To simplify the implementation of the <code>Test</code> interface, you can choose
to derive from the <code>AbstractTest</code> class and generate a <code>DefaultTestReport</code>.</p>
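<p>A skeleton for such a class might look like the sketch below. It assumes
that <code>AbstractTest</code> provides a 'safe' <code>run</code> method which
delegates to a hook that subclasses override (the exact hook name may differ,
so check the class first), and that <code>DefaultTestReport</code> offers
simple setters for the pass/fail status and error code. The bridge-exercising
logic is left as a placeholder.</p>
<source>
package org.apache.batik.bridge;

import org.apache.batik.test.AbstractTest;
import org.apache.batik.test.DefaultTestReport;
import org.apache.batik.test.TestReport;

/**
 * Illustrative skeleton only: checks that the bridge reports
 * the expected error code for a broken attribute value.
 */
public class ErrorHandlingTest extends AbstractTest {
    private String expectedErrorCode;

    public ErrorHandlingTest(String expectedErrorCode /* , other args */) {
        this.expectedErrorCode = expectedErrorCode;
    }

    /** Hook assumed to be wrapped by AbstractTest's 'safe' run. */
    public TestReport runImpl() throws Exception {
        DefaultTestReport report = new DefaultTestReport(this);

        // Placeholder: load the SVG, set the bad attribute value
        // on the target element and collect the error the bridge
        // reports while building the rendering tree.
        String actualErrorCode = null;

        if (expectedErrorCode.equals(actualErrorCode)) {
            report.setPassed(true);
        } else {
            report.setErrorCode("unexpected.error.code"); // illustrative
            report.setPassed(false);
        }
        return report;
    }
}
</source>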
<p>While writing the <code>Test</code>, you may want to use your own XML file containing just
your test, for example:</p>
<source>
&lt;testReportProcessor class="org.apache.batik.test.SimpleTestReportProcessor" /&gt;
&lt;test class="org.apache.batik.bridge.ErrorHandlingTest"&gt;
&lt;!-- Expected error code --&gt;
&lt;arg class="java.lang.String" value="expected.error.code" /&gt;
&lt;!-- Input SVG that this test manipulates to generate error conditions --&gt;
&lt;arg class="java.net.URL" value="file:test-resources/org/apache/batik/bridge/ErrorHandlingBase.svg" /&gt;
&lt;!-- Id of the element to test --&gt;
&lt;arg class="java.lang.String value="rectangle6" /&gt;
&lt;!-- Attribute to test --&gt;
&lt;arg class="java.lang.String value="x" /&gt;
&lt;!-- Value to test on the attribute --&gt;
&lt;arg class="java.lang.String value="abcd" /&gt;
&lt;/test&gt;
</source>
<p>This is just an example and does not pretend to be the right way to go about implementing
or specifying this specific type of test. Once done with tuning the test, one or more
configurations for the test can be added to the relevant test suite's XML description. In
some cases, it will be worthwhile to create a separate test suite.</p>
</s2>
</s1>
</body>
</document>