/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
/**
* Transport test kit base package.
*
* <h2>Introduction and high level overview</h2>
*
* In general a good test suite for an Axis2 transport should contain test cases that
* <ul>
* <li>test the transport sender in isolation, i.e. with non Axis2 endpoints;</li>
* <li>test the transport listener in isolation, i.e. with non Axis2 clients;</li>
* <li>test the interoperability between the transport sender and the transport listener.</li>
* </ul>
* In addition, the test suite should cover
* <ul>
* <li>different message exchange patterns (at least one-way and request-response);</li>
* <li>different content types (SOAP 1.1/1.2, POX, SOAP with attachments, MTOM, plain text, binary, etc.).</li>
* </ul>
* Also for some transports it is necessary to execute the tests with different transport
* configurations or with different protocol providers. For example, HTTP transport implementations
* are tested in HTTP 1.0 and HTTP 1.1 mode, and the JMS transport is tested with different
* JMS providers (currently Qpid and ActiveMQ).
* <p>
* The test kit grew out of the idea that it should be possible to apply a common set of tests
* (with different MEPs and content types) to several transports with a minimum of code duplication.
* By providing non Axis2 test clients and endpoints as well as the code that sets up the
* necessary environment as input, the framework should then be able to build a complete test suite
* for the transport.
* <p>
* It is clear that since each transport protocol has its own specificities, a high level of abstraction
* is required to achieve this goal. The following sections give a high level overview of the
* various abstractions that have been introduced in the test kit.
*
* <h3>Integration with JUnit</h3>
*
* One of the fundamental requirements for the test kit is to integrate well with JUnit.
* This requirement ensures that the tests can be executed easily as part of the Maven
* build and that other available tools such as test report generators and test coverage
* analysis tools can be used.
* <p>
* The usual approach to writing JUnit tests is to extend {@link junit.framework.TestCase}
* and to define a set of methods that implement the different test cases. Since the goal of the framework
* is to build test suites in an automated way and the number of test cases can be fairly high, this
* approach would not be feasible. Fortunately JUnit supports another way to create a test suite
* dynamically. Indeed JUnit scans the test code for methods with the following signature:
*
* <pre>public static TestSuite suite()</pre>
*
* A typical transport test will implement this method and use {@link org.apache.axis2.transport.testkit.TransportTestSuiteBuilder}
* to let the framework create the test suite.
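* <p>
* A minimal sketch of such a method (the way the builder is configured and the suite is
* obtained is an illustrative assumption, not the actual builder API):
* <pre>public class MyTransportTest extends TestCase {
*    public static TestSuite suite() throws Exception {
*        // Hypothetical setup code: the real builder is configured with
*        // channels, test clients and endpoints for the transport under test.
*        TransportTestSuiteBuilder builder = ...;
*        ...
*        return ...; // the suite assembled by the builder
*    }
*}</pre>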
*
* <h3>Test case naming</h3>
*
* One problem that immediately arises when building a test suite dynamically is that each test
* case must have a unique name, and that this name should be meaningful enough that a human
* seeing it in a report can get a basic idea of what the test case does.
* The names generated by the test kit have two parts:
* <ul>
* <li>A numeric ID which is the sequence number of the test case in the test suite.</li>
* <li>A set of key-value pairs describing the components that are used in the test case.</li>
* </ul>
* Example:
*
* <pre>0076:test=REST,client=java.net,endpoint=axis</pre>
*
* The algorithm used by the test kit to collect the key-value pairs is described in the documentation of
* the {@link org.apache.axis2.transport.testkit.name} package.
*
* <h3>Resource management</h3>
*
* In general setting up the environment in which a given test case is executed may be quite expensive.
* For example, running a test case for the JMS transport requires starting a message broker. Also
* every test case requires at least an Axis2 client and/or server environment to deploy the transport.
* Setting up and tearing down the entire environment for every single test case would be far too
* expensive. On the other hand the environments required by different test cases in a single test suite
* are in general very different from each other, so that it would not be possible to set up a common
* environment used by all the test cases.
* <p>
* To overcome this difficulty, the test kit has a mechanism that allows a test case to reuse resources
* from the previous test case. This is managed in an entirely transparent way by a lightweight
* dependency injection container (see [TODO: need to regroup this code in a single package]), so that
* the test case doesn't need to care about it.
* <p>
* The mechanism is based on a set of simple concepts: [TODO: this is too detailed for a high level overview and
* should be moved to the Javadoc of the relevant package]
* <ul>
* <li><p>Every test case is linked to a set of <em>resources</em> which are plain Java objects (that are not
* required to extend any particular class or implement any particular interface).
* These objects define the <em>resource set</em> of the test case (which is represented
* internally by a {@link org.apache.axis2.transport.testkit.tests.TestResourceSet}
* object).</p></li>
* <li><p>The lifecycle of a resource is managed through methods annotated by
* {@link org.apache.axis2.transport.testkit.tests.Setup} and {@link org.apache.axis2.transport.testkit.tests.TearDown}.
* These annotations identify the methods to be called when the framework sets up and tears down the resource.
* The arguments of the methods annotated using {@link org.apache.axis2.transport.testkit.tests.Setup} also
* define the <em>dependencies</em> of that resource.</p>
* <p>Example:</p>
* <pre>public class MyTestClient {
* \@Setup
* private void setUp(MyProtocolProvider provider) throws Exception {
* provider.connect();
* }
*}</pre>
* <p>As shown in this example, dependencies are specified by class (which may be abstract). The actual
* instance that will be injected is selected during <em>resource resolution</em>.</p></li>
* <li><p>Resources are (in general) resolved from the resource set of the test case. For example an instance
* of the <code>MyTestClient</code> class can only be used as a resource for a given test case
* if the resource set of this test case also contains an instance of <code>MyProtocolProvider</code>
* (more precisely an object that is assignment compatible with <code>MyProtocolProvider</code>).</p></li>
* <li><p>A resource will be reused across two test cases if it is part of the resource sets of both
* test cases and all its dependencies (including transitive dependencies) are part of both resource sets.
* The precise meaning of "reusing" in this context is using the same instance without calling the
* tear down and set up methods.</p>
* <p>For example, consider the following test cases and resource sets:</p>
* <table border="1">
* <tr><th>Test case</th><th>Resource set</th></tr>
* <tr><td>T1</td><td><code>c:MyTestClient</code>, <code>p1:MyProtocolProvider</code></td></tr>
* <tr><td>T2</td><td><code>c:MyTestClient</code>, <code>p1:MyProtocolProvider</code>, <code>r:SomeOtherResourceType</code></td></tr>
* <tr><td>T3</td><td><code>c:MyTestClient</code>, <code>p2:MyProtocolProvider</code>, <code>r:SomeOtherResourceType</code></td></tr>
* </table>
* <p>Assuming that <code>SomeOtherResourceType</code> is independent of <code>MyTestClient</code> and
* <code>MyProtocolProvider</code>, the lifecycle of the different resources will be as follows:</p>
* <table border="1">
* <tr><th>Transition</th><th>Lifecycle actions</th></tr>
* <tr><td>&bull; &rarr; T1</td><td>set up <code>p1</code>, set up <code>c</code></td></tr>
* <tr><td>T1 &rarr; T2</td><td>set up <code>r</code></td></tr>
* <tr><td>T2 &rarr; T3</td><td>tear down <code>c</code>, tear down <code>p1</code>, set up <code>p2</code>, set up <code>c</code></td></tr>
* <tr><td>T3 &rarr; &bull;</td><td>tear down <code>c</code>, tear down <code>p2</code>, tear down <code>r</code></td></tr>
* </table>
* <p>Even if T2 and T3 use the same instance <code>c</code> of <code>MyTestClient</code>, this resource
* is not reused (in the sense defined above) since the <code>MyProtocolProvider</code> dependency
* resolves to different instances.</p></li>
* </ul>
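* As a complement to the <code>MyTestClient</code> example above, the corresponding tear-down
* method would be declared as follows (a sketch; the method body is illustrative):
* <pre>public class MyTestClient {
* \@TearDown
* private void tearDown() throws Exception {
*     // Release the resources acquired in the set-up method,
*     // e.g. disconnect from the protocol provider.
* }
*}</pre>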
*
* <h3>Resources required by a transport test case</h3>
*
* Every transport test case (extending {@link org.apache.axis2.transport.testkit.tests.MessageTestCase})
* at least requires three resources:
* <ul>
* <li>A test client ({@link org.apache.axis2.transport.testkit.client.AsyncTestClient}
* or {@link org.apache.axis2.transport.testkit.client.RequestResponseTestClient}) that
* allows the test case to send messages (and receive responses).</li>
* <li>A test endpoint ({@link org.apache.axis2.transport.testkit.endpoint.AsyncEndpoint}
* or {@link org.apache.axis2.transport.testkit.endpoint.InOutEndpoint}). In the one-way case,
* this resource is used to receive requests sent by the test client. In the request-response
* case its responsibility is to generate well defined responses (typically a simple echo).</li>
* <li>A channel ({@link org.apache.axis2.transport.testkit.channel.AsyncChannel} or
* {@link org.apache.axis2.transport.testkit.channel.RequestResponseChannel}). This resource
* manages everything that is necessary to transport a message from a client to an endpoint.
* Depending on the transport this task can be fairly complex. For example, in the JMS case,
* the channel creates the required JMS destinations and registers them in JNDI, so that
* they can be used by the client and by the endpoint. On the other hand, for HTTP the
* channel implementation is very simple and basically limited to the computation of the
* endpoint reference.</li>
* </ul>
* <p>The test kit provides the following Axis2 based test client and endpoint implementations:</p>
* <table border="1">
* <tr>
* <th></th>
* <th>One-way</th>
* <th>Request-response</th>
* </tr>
* <tr>
* <th>Client</th>
* <td>{@link org.apache.axis2.transport.testkit.axis2.client.AxisAsyncTestClient}</td>
* <td>{@link org.apache.axis2.transport.testkit.axis2.client.AxisRequestResponseTestClient}</td>
* </tr>
* <tr>
* <th>Endpoint</th>
* <td>{@link org.apache.axis2.transport.testkit.axis2.endpoint.AxisAsyncEndpoint}</td>
* <td>{@link org.apache.axis2.transport.testkit.axis2.endpoint.AxisEchoEndpoint}</td>
* </tr>
* </table>
*
* <h3>Message encoders and decoders</h3>
*
* Different clients, endpoints and test cases may have fairly different ways to "naturally" represent
* a message:
* <ul>
* <li>To test the listener of an HTTP transport, an obvious choice is to build a test client
* that relies on standard Java classes such as {@link java.net.URLConnection}. For that
* purpose the most natural way to represent a message is as a byte sequence.</li>
* <li>All Axis2 based test clients and endpoints already have a canonical message
* representation, which is the SOAP infoset retrieved by
* {@link org.apache.axis2.context.MessageContext#getEnvelope()}.</li>
* <li>A test case for plain text messages would naturally represent the test message
* as a string.</li>
* </ul>
* Since defining a message representation that would be suitable for all clients, endpoints and test
* cases (and keep their implementation simple) is impossible, a different approach has been chosen
* in the framework: every client, endpoint or test case implementation chooses the Java type that it
* considers best suited to represent the message. When invoking the test client, a test case
* uses a {@link org.apache.axis2.transport.testkit.message.MessageEncoder} to transform the message
* from its own representation to the representation used by the test client. In the same way,
* a {@link org.apache.axis2.transport.testkit.message.MessageDecoder} is used to transform the message
* intercepted by the endpoint (in the one-way case) or the response message received by the test client
* (in the request-response case) back to the representation used by the test case.
* <p>
* [TODO: currently message encoders and decoders are chosen at compile time and the transformation
* is invoked indirectly by adapters; this will change in the future so that encoders and decoders are
* selected dynamically at runtime]
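* <p>
* The general shape of these transformations can be pictured as follows (an illustrative
* sketch only; the actual interfaces in the
* {@link org.apache.axis2.transport.testkit.message} package differ in their exact signatures):
* <pre>// T is the representation used by the producer of the message,
*// U the representation expected by its consumer.
*interface MessageEncoder&lt;T,U&gt; {
*    U encode(T message) throws Exception;
*}</pre>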
*
* <h3>Exclusion rules</h3>
*
* Sometimes it is necessary to exclude particular test cases (or entire groups of test cases) from the
* test suite generated by the test kit. There are various reasons why one would do that:
* <ul>
* <li>A test case fails because of some known issue in the transport. In that case it should be excluded
* until the issue is fixed. This is necessary to distinguish this type of failure from regressions.
* In general the tests checked in to source control should always succeed unless there is a regression.</li>
* <li>Sometimes a particular test case doesn't make sense for a given transport. For example a test
* case that checks that the transport is able to handle large payloads would not be applicable
* to the UDP transport which has a message size limitation.</li>
* <li>The test suite builder generates test cases by computing all possible combinations of MEPs, content types,
* clients, endpoints and environment setups. For some transports this results in a very high number of test
* cases. Since these test cases generally have a high degree of overlap, one can use exclusion rules
* to reduce the number of test cases to a more reasonable value.</li>
* </ul>
* The test kit allows exclusion rules to be specified using LDAP filter expressions. It takes advantage of the
* fact that each test case has a set of key-value pairs used to build the test case name. The LDAP filters
* are evaluated against this set.
* For example, {@link org.apache.axis2.transport.testkit.TransportTestSuiteBuilder} defines the following
* default exclusion rule:
*
* <pre>(&amp;(client=*)(endpoint=*)(!(|(client=axis)(endpoint=axis))))</pre>
*
* This rule excludes all test cases that would use a non Axis2 client and a non Axis2 endpoint.
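* <p>
* For example, using the key-value pairs shown earlier, a transport implementation with a known
* issue in its REST support could temporarily exclude the corresponding test cases with a rule
* such as the following (a hypothetical rule given for illustration):
* <pre>(test=REST)</pre>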
*
* <h3>Logging</h3>
*
* Transport test cases generally involve several interacting components, and some of these components
* may use multithreading. Also, experience has shown that some test cases may fail randomly (often with
* a failure probability highly dependent on the execution platform) because of subtle problems in the
* transport under test or in the tests themselves. All this can make debugging extremely difficult.
* To simplify this task, the test kit collects (or provides the necessary infrastructure to collect)
* as much information as possible during the execution of each test case.
* <p>
* The collected information is written to a set of log files managed by
* {@link org.apache.axis2.transport.testkit.util.TestKitLogManager}. An instance is added automatically to
* the resource set of every test case and other resources can acquire a reference through the dependency
* injection mechanism described above. This is the recommended approach. Alternatively, the log manager
* can be used as a singleton through {@link org.apache.axis2.transport.testkit.util.TestKitLogManager#INSTANCE}.
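* For example, a test resource can obtain a reference to the log manager through its set-up
* method (a sketch; <code>createLog</code> is a hypothetical method name given for illustration):
* <pre>public class MyTestEndpoint {
* \@Setup
* private void setUp(TestKitLogManager logManager) throws Exception {
*     log = logManager.createLog("endpoint");
* }
*}</pre>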
* <p>
* Log files are written to subdirectories of <tt>target/testkit-logs</tt>. The directory structure has
* a two level hierarchy identifying the test class (by its fully qualified name) and the test case
* (by its ID). It should be noted that the test results themselves (in particular the exception in case
* of failure) are still written to the standard JUnit/Surefire logs and that these logs should be
* consulted first. The test kit specific log files are only meant to provide additional information.
* <p>
* Each test case produces at least a <tt>01-debug.log</tt> file with the messages that were logged
* (using JCL) at level DEBUG during the execution of the test case. In addition, depending on the
* components involved in the test, the test kit will produce the following logs (<tt>XX</tt>
* denotes a sequence number which is generated automatically):
* <dl>
* <dt><tt>XX-formatter.log</tt></dt>
* <dt><tt>XX-builder.log</tt></dt>
* <dd><p>These files are produced when Axis2 test clients and endpoints are used.
* <tt>XX-formatter.log</tt> will contain the payload of an incoming message as seen by the
* {@link org.apache.axis2.transport.MessageFormatter}. <tt>XX-builder.log</tt> on the other
* hand will contain the payload of an outgoing message as produced by the
* {@link org.apache.axis2.builder.Builder}. Note that the number of log files depends on
* several factors, such as the MEP, whether the client or endpoint is Axis2 based or not and
* whether the transport chooses to use message builders and formatters or not.</p>
* <p>These files provide extremely valuable information since it is very difficult to get this
* data using other debugging techniques. Note that the files are created by
* {@link org.apache.axis2.transport.testkit.axis2.LogAspect} which relies on Aspect/J to
* intercept calls to message formatters and builders. This will only work if the tests are
* run with the Aspect/J weaver.</p></dd>
* <dt><tt>XX-service-parameters.log</tt></dt>
* <dd><p>If the test case uses an Axis2 based endpoint, this file will contain the parameters
* of the {@link org.apache.axis2.description.AxisService} implementing this endpoint.
* This information is useful since the service configuration is in general determined
* by different components involved in the test.</p></dd>
* </dl>
*/
package org.apache.axis2.transport.testkit;