WTP automated testing

This document outlines the principles of WTP automated testing and provides guidelines for creating and running test cases. If you have suggestions for this document, please post a message to the wtp-dev mailing list.
Testing
The goal of unit testing is to achieve:
The goal of performance testing is to achieve uniform or better performance as WTP moves forward. As a developer, your dedication to WTP performance is strongly desired. To ensure that the performance of WTP does not regress over time, developers should provide performance test cases alongside their features. Developers are also expected to verify their bug fixes and feature contributions against the existing performance test cases. If something is not performing well, open a bug and use "performance" as the keyword; searching on that keyword yields the list of all currently open performance bugs in WTP.

Eclipse has a performance infrastructure in place for measuring and tracking performance, and the performance processes described in this document are modeled around that infrastructure. To create and run tests under this infrastructure, please refer to the Eclipse Tests How-To document. Eclipse also provides tips and tools to help developers debug and track down performance problems.

Components may have their own testing requirements. For example, the server tools framework often calls APIs on server extensions to retrieve data about the extension. Some of these APIs must be short running because they are called from the UI. The server tools framework therefore provides abstract performance test cases that extensions should extend to verify that code contributed by the extension does not regress performance in the base framework. Performance requirements from component teams are listed in a document located in that component's "development" directory in CVS.

Please refer to the WTP Development Practices document regarding any development-related issues.
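As a concrete illustration, the sketch below shows the general shape of a test built on the Eclipse performance infrastructure's PerformanceTestCase base class. The scenario (opening an editor), the helper method, and the class name are all hypothetical; only the measuring and assertion calls belong to the infrastructure itself.

    import org.eclipse.test.performance.Dimension;
    import org.eclipse.test.performance.PerformanceTestCase;

    // Hypothetical performance test; the scenario and helper are illustrative only.
    public class OpenEditorPerformanceTest extends PerformanceTestCase {

        public void testOpenEditor() throws Exception {
            // Tag the test so it appears in the generated summary graphs,
            // reporting elapsed process time.
            tagAsSummary("Open source editor", Dimension.ELAPSED_PROCESS);

            // Repeat the scenario several times so the infrastructure can
            // average out noise in the measurements.
            for (int i = 0; i < 10; i++) {
                startMeasuring();
                openAndCloseEditor(); // hypothetical helper exercising the scenario
                stopMeasuring();
            }

            // Store the measurements and fail if they regress relative to
            // the configured reference build.
            commitMeasurements();
            assertPerformance();
        }

        private void openAndCloseEditor() {
            // ... open an editor on a test file, then close it ...
        }
    }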
Creating JUnit test cases
Here is a checklist for integrating test plug-ins into the WTP build (a sketch of a typical top-level suite class follows):
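Whatever the individual checklist items, most test plug-ins expose a single JUnit suite class that the build's test scripts can invoke. A minimal sketch, assuming nothing about the actual WTP build scripts; all names are hypothetical:

    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    // Hypothetical top-level suite for a test plug-in; a real plug-in
    // would register its own test classes here.
    public class AllTests {

        public static Test suite() {
            TestSuite suite = new TestSuite("org.example.wtp.tests");
            suite.addTestSuite(ExampleTest.class);
            return suite;
        }

        // Trivial placeholder so the sketch is self-contained.
        public static class ExampleTest extends TestCase {
            public void testPlaceholder() {
                assertTrue(true);
            }
        }
    }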
The Eclipse Tests How-To document has a very thorough explanation of how to create and run performance test cases using the Eclipse performance infrastructure.
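For tests that cannot extend PerformanceTestCase (for example, because they must already extend another base class), the same infrastructure exposes a lower-level meter API, also covered in the How-To. A minimal sketch, with the measured scenario hypothetical:

    import junit.framework.TestCase;
    import org.eclipse.test.performance.Performance;
    import org.eclipse.test.performance.PerformanceMeter;

    // Sketch of the lower-level meter API; scenario and helper are hypothetical.
    public class ParsePerformanceTest extends TestCase {

        public void testParseLargeFile() {
            Performance perf = Performance.getDefault();
            PerformanceMeter meter = perf.createPerformanceMeter(perf.getDefaultScenarioId(this));
            try {
                for (int i = 0; i < 10; i++) {
                    meter.start();
                    parseLargeFile(); // hypothetical code under measurement
                    meter.stop();
                }
                // Persist the measurements and compare them against the reference build.
                meter.commit();
                perf.assertPerformance(meter);
            } finally {
                meter.dispose();
            }
        }

        private void parseLargeFile() {
            // ... exercise the code whose performance is being tracked ...
        }
    }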
WTP performance process
This section describes the process for tracking performance in WTP; it is based on the process used by the Eclipse Platform Project. All performance tests must be automated, and they are run every week against Thursday's integration build. Performance tests should:
If either condition fails, the failure should be handled immediately according to the WTP Development Practices document. Performance results are stored in a Cloudscape database and are compared against results from the previous release. In case of a regression, a note will be posted to the mailing list indicating the problem.
Performance results from the weekly integration build are rendered into a graph, which is linked from the build page. This graph provides a simple comparison between the integration build and the reference build. To run performance tests for a build that is available from the Eclipse download Web site:
Running performance tests for a local build is similar. Go to your ${buildDirectory} directory and check the buildType, buildId, and timestamp for your local build, then go through the same steps as above. If you have a Cloudscape database set up (refer to the Eclipse Tests How-To document), the performance results are written to the ${testDir}/results directory; otherwise, they are displayed in the console.
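For reference, the performance infrastructure is pointed at a results database and a reference build through JVM system properties. The property names below are those used by the Eclipse performance infrastructure; all values are placeholders, so substitute the details of your own setup (see the Eclipse Tests How-To for the authoritative value formats):

    # Location of the Cloudscape database that stores the results:
    -Declipse.perf.dbloc=/path/to/perfDB

    # Identifies the build and machine the new results belong to:
    -Declipse.perf.config=build=I20050428;host=testMachine;jvm=sun142

    # Reference build whose stored results the new results are asserted against:
    -Declipse.perf.assertAgainst=build=R1.0

Leaving eclipse.perf.dbloc unset means no database is used, which is consistent with the console fallback described above.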