Wednesday, September 28, 2011

Pruning the Decision Tree in Test

Yesterday I wrote about the need to reduce the number of things a project attempts to do in order to deliver a great product.  Too many seemingly good ideas can make a product late, fragmented, or both.  The same is true of testing a product.  Great testing is more about deciding what not to test than deciding what to test.

There is never enough time to test everything about a product.  This isn’t just the fault of marketing, which has a go-to-market date in mind.  It is a physical reality.  Thoroughly testing a product requires traversing the entire state tree in every possible combination.  This is analogous to the traveling salesman problem and is thus NP-Complete.  In layman’s terms, this means that for any non-trivial program there is not enough time to test everything.
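To make the scale concrete, here is a back-of-the-envelope sketch in Python.  Every count in it is invented purely for illustration, but it shows how quickly exhaustive coverage runs away from any schedule:

    import math

    # Hypothetical numbers, chosen only to show the growth rate.
    boolean_settings = 30                       # independent on/off options
    configurations = 2 ** boolean_settings      # ~1 billion combinations

    steps_in_workflow = 10                      # user actions that can occur in any order
    orderings = math.factorial(steps_in_workflow)

    total_cases = configurations * orderings
    years = total_cases / (60 * 60 * 24 * 365)  # at an optimistic one case per second

    print(f"{total_cases:,} cases, roughly {years:,.0f} years at one case per second")

Even with these modest, made-up numbers the answer is measured in millions of years, which is why exclusion, not inclusion, is the real planning problem.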

When someone first starts testing, thinking up test cases is hard.  We often ask potential hires to test something like a telephone or a pop machine.  We are looking for creativity.  Can they think up a lot of interesting test cases?  After some time in the field, however, most people can think up far more tests than they have time to carry out.  The question is then no longer one of inclusion, but one of exclusion.

In Netflix’s case the exclusion was for focus.  That is not the right exclusion criterion for testing.  It is improper to skip testing the UI just so you can test the backing database.  Instead, the criteria by which tests should be excluded are more complex.  There is no single criterion, or set of criteria, that works for every project.  Here are some widely applicable ones to consider:

  • Breadth of coverage – Oftentimes it is better to test everything a little than to test some things very deeply and others not at all.  Don’t get caught up testing just one part.
  • Scenario coverage – Look for test cases that intersect the primary usage patterns of the users.  If no one is likely to try to put a square inside a circle inside a square, finding a bug there is not highly valuable.
  • Risk analysis – What areas of the product would be most problematic if they went wrong?  Losing user data is almost always really bad.  Drawing a line one pixel off often is not.  If you have to choose, prefer focusing more on the data than the drawing.  Another important area for many projects is legal or regulatory requirements.  If you have these, make sure to test for them.  It doesn’t matter how well your product works if the customer is not allowed to buy it.
  • Cost of servicing – If forced to choose, spend more time on the portions that will be more difficult or costly to service if a bug shows up in the field.  For instance, in a client-server architecture it is usually easier to service the server, which is in one spot and under your control, than to update the client software on hundreds or thousands of machines.
  • Testing cost – While not a good criterion to use by itself, if a test is too expensive to carry out or to automate, perhaps it should be skipped in favor of writing many more tests that are much cheaper.
  • Incremental gains – How much does this test case add to existing coverage?  It is better to try something wholly new than another slight variation on an existing case.  Thinking in terms of code coverage may help here.  It is usually better to write a case which tests 10 new blocks than one which tests 15 already-covered blocks and 2 new ones.  It is very possible that two test cases are each great on their own, but the combination is not.  Choose one.  A sketch of this idea follows the list.
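Here is the sketch referenced above: a small greedy selection that always picks the candidate adding the most not-yet-covered blocks.  The test names and block numbers are hypothetical; the point is only to show the incremental-gain calculation.

    def select_tests(candidates: dict[str, set[int]], budget: int) -> list[str]:
        """Greedily pick up to `budget` tests, preferring the largest gain in new coverage."""
        candidates = dict(candidates)          # work on a copy
        covered: set[int] = set()
        selected: list[str] = []
        for _ in range(budget):
            if not candidates:
                break
            # Gain = blocks this test would add beyond what is already covered.
            name, blocks = max(candidates.items(), key=lambda kv: len(kv[1] - covered))
            if not blocks - covered:
                break                          # everything left is pure overlap
            selected.append(name)
            covered |= blocks
            del candidates[name]
        return selected

    candidates = {
        "ui_smoke":         set(range(0, 20)),             # 20 blocks
        "ui_smoke_variant": set(range(0, 15)) | {20, 21},  # 15 already-covered blocks + 2 new
        "db_roundtrip":     set(range(40, 50)),            # 10 wholly new blocks
    }
    print(select_tests(candidates, budget=2))  # ['ui_smoke', 'db_roundtrip']

Once ui_smoke is selected, the variant only adds 2 new blocks while db_roundtrip adds 10, so the variant is the case that gets pruned.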

There are many more criteria that could be used.  The important point is to have criteria and to make intentional decisions.  A test planning approach that merely says, “What are the ways we can test this product?” is insufficient.  It will generate too many test cases, some of which will never be carried out due to time or cost.  It is important to prune the decision tree up front so that the most important cases are done and the least important ones are left behind.  Do this up front, in the test spec, not on the fly as resources dwindle.
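One way to make that pruning explicit in the test spec is to score each candidate case against the criteria above and keep only what fits the schedule.  The fields, weights, and capacity below are assumptions chosen for illustration, not a prescription:

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        risk: int             # 1-5: how bad is a miss here?  (data loss = 5)
        servicing_cost: int   # 1-5: how painful is a fix in the field?  (client code = high)
        scenario_weight: int  # 1-5: how central is this to real usage?
        test_cost: int        # 1-5: how expensive is the test to write and run?

    def score(c: Candidate) -> int:
        # Favor risk, servicing pain, and real-world relevance; penalize expensive tests.
        return 3 * c.risk + 2 * c.servicing_cost + 2 * c.scenario_weight - c.test_cost

    def prune(candidates: list[Candidate], capacity: int) -> list[Candidate]:
        """Keep the top-scoring cases that fit the plan's capacity."""
        return sorted(candidates, key=score, reverse=True)[:capacity]

    plan = prune([
        Candidate("user data survives a crash",   risk=5, servicing_cost=4, scenario_weight=5, test_cost=3),
        Candidate("ruler ticks are pixel-perfect", risk=1, servicing_cost=2, scenario_weight=2, test_cost=2),
        Candidate("client auto-update succeeds",   risk=4, servicing_cost=5, scenario_weight=3, test_cost=4),
    ], capacity=2)
    print([c.name for c in plan])  # the pixel-perfect case is the one left behind

The exact weights matter far less than the fact that the decision is written down, reviewed, and made before the schedule forces it.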

2 comments:

  1. Speaking of servicing:
    * Always test your auto-update functionality (and make sure it's bulletproof)!
    * Testing your failure/error reporting is almost equally important

  2. Very useful blog.  Makes a lot of sense in practice.
