HCL OneTest provides a suite of products that enables organizations to improve their software quality by finding problems earlier in the delivery lifecycle. The suite enables testing of different aspects, including UI, API, service virtualization, and performance. So how do we test effectively and keep teams agile without burdening them with many tests that need maintenance? Read on for insights into how we are working to make test maintenance easier with a unified, next-generation approach.
The unified approach starts with the concept that everyone is going to find life easier with a single product versus one for each aspect. So, we plan to create a new product that brings together the capabilities that customers value from the existing products into one consistent framework. This enables a user to easily perform UI, API, and performance tests, as well as service virtualization, within a single product.
The emphasis is on simple, which means these necessarily complicated products cannot just be slammed together, as the result would be too complicated to comprehend. Therefore, the focus will be on the pieces that work well together. Any missing pieces can still be found in our existing products which will continue to be supported.
If a test breaks, when do you fix it? Now or never! To answer this question, you need a metric for quality that is not "Number of Tests"; otherwise, deleting a test will have reduced your quality.
The goal is always to get high-quality software into production. This goal should be delayed only when doing so reduces failures in production. If a test slows the goal without reducing failures, it's probably not worth maintaining.
To assist with gauging the value of a test, the product needs to provide some metrics about it: time to run is the easy one; code coverage in the system under test is a more complicated one.
Tests are often written after the code, which is something test-driven development seeks to change. Code-first happens because it is the easier place to start; after all, tests will fail while the system under test is unavailable. However, with service virtualization integrated, we can run tests offline to demonstrate how they will behave before the system under test is available. Tests written first validate behaviour rather than implementation, so they endure longer and require less maintenance.
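To make the idea concrete, here is a minimal sketch (plain Python, not HCL OneTest APIs; all names are illustrative) of a test written before the real service exists, exercised against a virtual service that honours the agreed contract:

```python
# Client code under test: it validates behaviour (the contract),
# not the implementation of the service behind it.
def get_account_balance(service, account_id):
    response = service.request("GET", f"/accounts/{account_id}/balance")
    if response["status"] != 200:
        raise ValueError(f"unexpected status {response['status']}")
    return response["body"]["balance"]

class VirtualAccountService:
    """A stand-in for the yet-to-be-built service, returning canned responses."""
    def __init__(self, balances):
        self.balances = balances

    def request(self, method, path):
        account_id = path.split("/")[2]
        if account_id in self.balances:
            return {"status": 200, "body": {"balance": self.balances[account_id]}}
        return {"status": 404, "body": {}}

# The test runs "offline", long before the real service is deployed.
virtual = VirtualAccountService({"acc-1": 125.50})
assert get_account_balance(virtual, "acc-1") == 125.50
```

When the real service arrives, the same test runs against it unchanged; only the service handle swaps.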
Strategies that make maintenance schedulable rather than requiring immediate action can be built-in:
- UI tests typically interact with controls on a web page. As designs evolve, tests can break because those controls can no longer be located. However, when multiple strategies are defined to find controls, those objects can still be found even if their positions change, using, for example, labels.
- Performance tests reproduce at scale how end users interact with the system under test by simulating the client implementation. Therefore, when the client changes, the performance test is broken. If the client implementation details used by performance tests are embedded into UI tests, then verification of client implementation details becomes continuous.
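The first point can be sketched in a few lines of plain Python (illustrative only, not HCL OneTest or WebDriver APIs): a control lookup that tries each defined strategy in order, so a control whose id disappeared after a redesign is still found by its label.

```python
# Locate a control by trying several strategies in order; the test only
# breaks when *every* strategy fails, making maintenance schedulable.
def find_control(page, strategies):
    """Return the first control matched by any (attribute, value) strategy."""
    for attribute, value in strategies:
        for control in page:
            if control.get(attribute) == value:
                return control
    return None

# A page whose Submit button lost its id in a redesign but kept its label.
page = [
    {"id": "btn-cancel", "label": "Cancel"},
    {"label": "Submit"},  # id removed by the redesign
]

control = find_control(page, [("id", "btn-submit"), ("label", "Submit")])
assert control is not None and control["label"] == "Submit"
```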
The focus is on getting feedback to users as fast as possible, so that the implications of any change can be understood before the user switches tasks.
It’s very easy to make the mistake of creating a new test by copying an existing one and tweaking it a bit. Who hasn’t done this? But today’s quick win is tomorrow’s maintenance headache. This can be minimized with:
- Reusable sequences of steps, so you can construct new tests from already working pieces.
- Reusable messages, so that when your schema changes you change a few messages rather than many, many tests.
- Data-driven, because sometimes another test really can be just another row in a spreadsheet.
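The last point can be sketched as follows (plain Python, illustrative only; the function and column names are assumptions): each new test case is just another row of data rather than a copied-and-tweaked test.

```python
import csv
import io

def apply_discount(price, percent):
    """System under test: a simple discount calculation."""
    return round(price * (1 - percent / 100), 2)

# In practice the rows would live in a spreadsheet or CSV file;
# an in-memory CSV keeps this sketch self-contained.
rows = io.StringIO(
    "price,percent,expected\n"
    "100.00,10,90.00\n"
    "19.99,0,19.99\n"
    "50.00,50,25.00\n"
)

# One test body, many cases: adding a case is adding a row.
for row in csv.DictReader(rows):
    result = apply_discount(float(row["price"]), float(row["percent"]))
    assert result == float(row["expected"]), row
```

The schema change scenario works the same way: when a message format changes, the shared message definition is updated once and every row-driven case picks it up.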
Hopefully you will agree that this possible future looks exciting. As we continue with the evolution of our HCL OneTest product suite, we are very interested in any feedback, ideas, or pain points you wish to share at our ideas portal.