Testing your tests for value

Oct 15, 2014

I’d like to start this week’s article with a show of hands. Who gets excited by the prospect of software testing? Really excited? Put your hands up. Let’s face it, we all accept that testing has to be done, but it’s hard to get motivated about actually doing it. Tests get created and run, but the question we’d like to pose is whether the best possible tests have been created. We often put a spotlight on good code and software architecture patterns for quality, but how often do we turn that spotlight on our tests? A test is a test is a test, right? Well, no. How do we know that the tests we create are the best?

So what makes a good test? If we think about the purpose of testing, it is ultimately to verify that we have a good product: one that provides the intended value for our customers. And the value to our customers is provided by the product’s features. So a good test is one that ensures that the features work. However, there’s more to it than that.

The ability of a product to provide the desired response to a request depends on what else is going on in its environment. You can’t just roll out a car, test its features in isolation and say it works. You have to know it works in all the different conditions that could come up in its operating environment: the outdoors. So for software, a good test is one that exercises features across the range of conditions that might arise in the product’s environment. These conditions might include databases being unavailable, data not satisfying certain dependencies, or performance problems caused by other traffic being serviced at the same time.

Thinking of these conditions as testing layers that we want to verify, there is the feature layer, which focusses on delivering value to the end user; the system layer, which provides supporting functions for the feature layer (for example, the logical units of work that support the end-to-end feature); and the environment that both of these reside in.

The challenge most projects face is how to create good tests that consider all three layers together:

• Developer-written, automated unit and end-to-end tests tend to cover success and failure scenarios focussed mainly at the system layer, because that’s the level at which the developer writes code. They may handle specific environment conditions, such as resources not being present (see the sketch after this list), but they won’t cover all of them, for example performance and scalability concerns.

• Functional tests tend to be feature based, covering feature-layer success and failure scenarios. Designed well, these provide a high degree of confidence that the features are operating as expected, but they don’t focus on system or environment concerns. It’s like testing how a car performs only when it has a full tank of gas, everything has been maintained correctly and the day is sunny.

• Performance and scalability tests aim to test a sample of feature success scenarios while concentrating on specific environment concerns.
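
To make the first point concrete, here is a minimal sketch of a developer-written test that covers one specific environment condition: the backing database being unavailable. It is written in Python with pytest conventions, and the names (OrderService, DatabaseUnavailableError, place_order) are hypothetical, invented for illustration:

```python
from unittest.mock import Mock

class DatabaseUnavailableError(Exception):
    """Raised when the backing store cannot be reached."""

class OrderService:
    """Hypothetical system-layer service, invented for this sketch."""
    def __init__(self, db):
        self.db = db

    def place_order(self, order):
        try:
            return self.db.save(order)
        except DatabaseUnavailableError:
            # System-layer behaviour under an environment failure:
            # degrade gracefully instead of crashing the feature.
            return None

def test_place_order_when_database_is_down():
    # Simulate the environment condition with a mock that always fails.
    db = Mock()
    db.save.side_effect = DatabaseUnavailableError()
    service = OrderService(db)
    assert service.place_order({"item": "widget"}) is None
```

Note that this exercises exactly one environment condition. Enumerating performance or scalability conditions the same way quickly becomes impractical at this level, which is the gap the other test stages have to fill.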

For the most part, each single test phase hits two out of the three layers. And within many projects, the person who writes the unit and end-to-end tests is different from the person who writes and runs the functional tests, who is different again from the person running the performance tests. Each uses information created by other teams, but often that information lacks the detail or timeliness needed to help with creating tests. So how can this be improved?

The answer is communication, knowledge sharing and traceability. It might sound like an anti-climax that no radical shift in testing approach is required, but the first part of the solution is simply awareness that your tests may not be covering everything you need them to. Look at who is responsible for each stage of testing and who they are communicating with when creating those tests.

The second part of the solution is encouraging greater sharing of knowledge about the three layers, so that it can be used in as many test stages as possible. Developers often need finer-grained business rules than are present in the requirements to help them write tests, and in turn they are essentially creating system- and environment-level requirements while they code. Capturing these in a standard format provides information that other teams can use to create better tests.
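
What that standard format looks like is less important than having one. As one illustration (the schema, field names and rule contents below are invented for this sketch, not taken from any standard), a captured rule could be a small structured record with a stable identifier, a layer tag and a given/when/then description:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    id: str      # stable identifier that tests can trace back to
    layer: str   # "feature", "system" or "environment"
    given: str   # precondition
    when: str    # action
    then: str    # expected outcome

RULES = [
    Rule("BR-101", "feature",
         given="a customer with a saved payment method",
         when="they place an order",
         then="the order is confirmed on a single screen"),
    Rule("SYS-017", "system",
         given="the payment gateway times out",
         when="an order is submitted",
         then="the order is queued for retry, not lost"),
    Rule("ENV-003", "environment",
         given="500 concurrent checkout sessions",
         when="orders are submitted",
         then="95% of them complete within 2 seconds"),
]
```

Because every rule carries a stable identifier and a layer tag, the developer, the functional tester and the performance tester can all point their tests at the same record.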

The third part is verifying that the finer-grained business rules used for testing, along with the system and environment requirements, are traced through to tests. Doing this can seem time consuming, but the work can be delegated to the person creating the tests, and ultimately it is the only way to know that you have coverage.
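
As a sketch of how that tracing might be automated rather than tracked by hand (the "traces" marker, the rule IDs and the reporting hook below are assumptions for illustration, not an established convention), each test can declare the rules it covers, and a small hook can flag any rule that no test claims:

```python
import pytest

# IDs from the captured rule catalogue (see the sketch above).
CAPTURED_RULE_IDS = {"BR-101", "SYS-017", "ENV-003"}

@pytest.mark.traces("BR-101")
def test_saved_payment_checkout():
    assert True  # real assertions would exercise the feature here

@pytest.mark.traces("SYS-017")
def test_gateway_timeout_queues_order():
    assert True

# This hook must live in conftest.py (or a plugin) for pytest to call it.
# After collection it reports every rule no test traces to; with only the
# two tests above, ENV-003 would be flagged.
def pytest_collection_finish(session):
    covered = set()
    for item in session.items:
        for mark in item.iter_markers(name="traces"):
            covered.update(mark.args)
    missing = CAPTURED_RULE_IDS - covered
    if missing:
        print("Rules with no tracing test:", sorted(missing))
```

Registering the "traces" marker (for example in pytest.ini) stops pytest warning about an unknown mark; beyond that, the report makes missing coverage visible at collection time rather than after release.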

In summary, testing is often seen as drawing on information created for other purposes in a project. To give greater confidence in coverage, what we’d like to suggest here is that testing actually needs its own dedicated information, and needs the other disciplines of the project to recognise that. After all, good tests are the best way to verify that you have a good product!

To learn more about how these test strategies might work for your specific team and working environment, please Contact Us. And sign up for our newsletter to stay informed of new articles, news and events from Inviting Futures.
