Wednesday, June 4, 2008

Test For Failure, Not Success

We recently went through a round of test spec reviews on my team.  Having read a good number of test specs in a short period of time, I came to a realization: it is imperative to know the failure condition in order to write a good test case.  This is at least as important as understanding what success looks like, if not more so.

Too often I saw a test case described by calling out what it would do, but not listing or even implying what the failure would look like.  If a case cannot fail, passing has no meaning.  I might see a case such as (simplified): "call API to sort 1000 pictures by date."  Great.  How is the test going to determine whether the sort took place correctly?
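
To make this concrete, here is a minimal sketch of the difference, assuming a hypothetical sort_by_date API and picture objects with date and name fields (the names are illustrative, not from any real library):

    # Weak case: exercises the API but can never fail on a bad sort.
    def test_sort_pictures_weak(library):
        library.sort_by_date(library.pictures[:1000])  # no failure condition at all

    # Better case: states the failure condition explicitly.
    def test_sort_pictures_by_date(library):
        pictures = library.pictures[:1000]
        result = library.sort_by_date(pictures)
        # Failure condition: some picture is dated later than the one that follows it.
        for earlier, later in zip(result, result[1:]):
            assert earlier.date <= later.date, (
                f"{earlier.name} ({earlier.date}) sorted after {later.name} ({later.date})")

The second case can fail; the first cannot, so a "pass" from it means nothing.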

The problem is even more acute in stress or performance cases.  A case such as "push buttons on this UI for 3 days" isn't likely to fail.  Sure, the UI could fault, but what if it doesn't?  What sort of failure is the author intending to find?  Slow reaction time?  Resource leaks?  Drawing issues?  Without calling these out, the test case could be implemented in a manner where failure will never occur because it isn't paying attention to the right state.  The UI could run slowly and the automation would never notice.  How slow is too slow, anyway?  The tester would feel comfortable that she had covered the stress scenario, but in reality the test adds no new knowledge about the quality of the product.
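
A stress loop with the failure conditions written into it might look something like the sketch below.  The ui object, the get_process_memory helper, and both thresholds are assumptions made up for illustration; the point is only that "too slow" and "leaking" are checked on every iteration:

    import time

    MAX_RESPONSE_SECONDS = 0.5                    # assumed answer to "how slow is too slow"
    MAX_MEMORY_GROWTH_BYTES = 50 * 1024 * 1024    # assumed leak budget for the whole run

    def stress_buttons(ui, get_process_memory, duration_seconds):
        baseline = get_process_memory()
        end = time.time() + duration_seconds
        while time.time() < end:
            start = time.time()
            ui.click_random_button()
            elapsed = time.time() - start
            # Failure condition 1: the UI became too slow to respond.
            assert elapsed <= MAX_RESPONSE_SECONDS, f"response took {elapsed:.2f}s"
            # Failure condition 2: resources are leaking as the run goes on.
            growth = get_process_memory() - baseline
            assert growth <= MAX_MEMORY_GROWTH_BYTES, f"memory grew by {growth} bytes"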

Another example:  "Measure the CPU usage when doing X."  This isn't a test case.  There is no failure condition.  Unless there is a threshold over which a failure is recorded, it is merely collecting data.  Data without context is of little value.
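
The same measurement becomes a test the moment a threshold is attached.  A sketch, assuming psutil is available and using a made-up do_x operation and a made-up 40% budget:

    import psutil

    CPU_BUDGET_PERCENT = 40.0   # assumed limit the team has agreed on

    def test_cpu_usage_during_x(do_x):
        process = psutil.Process()
        process.cpu_percent(interval=None)           # prime the counter
        do_x()
        usage = process.cpu_percent(interval=None)   # CPU use since the priming call
        # Failure condition: doing X costs more CPU than the agreed budget.
        assert usage <= CPU_BUDGET_PERCENT, f"X used {usage:.1f}% CPU, budget is {CPU_BUDGET_PERCENT}%"

Without the assert, the numbers are just data; with it, the run can actually fail.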

When coming up with test cases, whether writing them down in a test spec or devising them on the fly while writing or executing tests, consider the failure condition.  Knowing what success looks like is insufficient.  It must also be possible to enumerate what failure looks like.  Only when the test looks for the failure condition and does not find it does a passing result gain value.

3 comments:

  1. I'm not sure I got your point. Any test case description should contain "Initial conditions, testing steps, expected results" information.
    Without these details I don't see how you can execute them (except if the writer implies that the tester will know it thanks to his product/environment knowledge).
    From this information, combined with the acceptance criteria, it's not so difficult to define failure conditions.

  2. I'm saying think about the failure conditions first, then work backward to the steps.  It may seem obvious, but a lot of people don't do that.

  3. As a general rule in testing, if it doesn't meet the expected results in any way, it's a failure. Why try to define fifty failure conditions, when all you really care about is whether it meets the expected results? Your expected results should be written clearly enough that they're unambiguous.
    As for tests such as "Measure the CPU usage when doing X.", the point there is to see what the CPU usage is, and then determine whether it is acceptable and/or how much effort you are willing to put into improving the outcome.
    Exploratory testing still has its place: when it's more time/cost efficient to run a quick test and then analyze the result, such as when you're checking something that was not considered at all in the requirements phase.
