Monday, April 21, 2008

Know That Which You Test

Someone recently related to me his experience using the new Microsoft Robotics Studio.  He loaded it up and proceeded through one of the tutorials.  To make sure he understood, he typed everything in instead of cutting and pasting the sample code.  After doing so, he compiled and ran the results.  It worked!  It did exactly what it was supposed to.  The only problem: he didn't understand anything he had typed.  He went through the motions of typing in the lines of code, but didn't understand what they really meant.  Sometimes testers do the same thing.  It is easy to "test" something without actually understanding it.  Doing so is dangerous.  It lulls us into a false sense of security.  We think we've done a good job testing the product when in reality we've only scratched the surface.

Being a good tester requires understanding not just the language we're writing the tests in, but also what is going on under the covers.  Black-box testing can be useful, but without a sense of what is happening inside, it can only ever be naive.  Without breaking the surface, it is nearly impossible to understand what the equivalence classes are.  It is hard to find the corner cases or the places where errors are most likely to happen.  It's also very easy to miss a critical path because it isn't apparent from the API.
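
As a rough illustration, consider a hypothetical shipping-cost function (the name, rates, and 20 kg threshold below are all invented for the example).  From the outside it simply maps a weight to a price, but internally it switches formulas above a certain weight, creating an equivalence class and a boundary that a purely black-box tester has no reason to suspect.

    def shipping_cost(weight_kg):
        """Return the shipping cost for a package of the given weight.

        Hypothetical example: the spec only says "cost depends on weight",
        but the implementation switches formulas at 20 kg.
        """
        if weight_kg <= 20:
            return 5.00 + 0.50 * weight_kg      # standard rate
        # Heavy packages ship with a different carrier and a different formula.
        return 15.00 + 0.75 * (weight_kg - 20)

    # Black-box cases chosen without reading the code tend to cluster in one
    # class (1 kg, 5 kg, 10 kg) and never reach the second branch.  Knowing the
    # 20 kg boundary exists gives you the classes (<= 20, > 20) and the corner
    # cases (exactly 20, just above 20) for free.
    assert shipping_cost(10) == 10.00   # first class
    assert shipping_cost(20) == 15.00   # boundary of the first class
    assert shipping_cost(21) == 15.75   # hidden second class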

There are three practices that help remedy this.  First, program in the same language as whatever is being tested.  A person writing tests in C# against a COM interface will have a hard time even beginning to understand the infrastructure beneath the interface.  It is also difficult to appreciate the frailties of a language other than the one you are working in; each language has different weaknesses, and thinking about the weaknesses of C++ will blind a person to the weaknesses of Perl.  Second, use code coverage data to help guide testing.  Examining code coverage reports can uncover places that have been missed.  If possible, measure coverage against each test case and validate that each new case adds to the coverage.  If it doesn't, the case is probably covering the same equivalence class as another test.  Third, and perhaps most importantly, become familiar with the code being tested.  Read the code.  Read the specs.  Talk to the developers.
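
As a minimal sketch of the second practice, here is one way to measure coverage per test case using Python and the coverage.py library (the test functions and the module_under_test.py filename are placeholders): run each case under its own measurement and flag any case that adds no lines beyond what earlier cases already covered.

    import os
    import coverage

    def lines_covered_by(test_func, source_file="module_under_test.py"):
        """Run a single test under coverage and return the lines it executed."""
        cov = coverage.Coverage()
        cov.start()
        try:
            test_func()
        finally:
            cov.stop()
        data = cov.get_data()
        # Coverage records files by absolute path; source_file is a placeholder.
        return set(data.lines(os.path.abspath(source_file)) or [])

    def report_redundant_cases(test_cases):
        """Flag cases that add nothing beyond the coverage of earlier cases."""
        covered = set()
        for test in test_cases:
            lines = lines_covered_by(test)
            if not lines - covered:
                # Probably the same equivalence class as an earlier case.
                print(f"{test.__name__}: adds no new coverage")
            covered |= lines

Even a crude check like this makes the overlap visible: a case that lights up no new lines is usually retreading ground another case already covers.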

3 comments:

  1. Link to Robotics Studio leads to error page.

  2. More info. Link works in Internet Explorer, but leads to error page in Firefox.

  3. "It is easy to 'test' something without actually understanding it.  Doing so is dangerous.  It lulls us into a false sense of security.  We think we've done a good job testing the product when in reality we've only scratched the surface."
    This is a reasonable caution as far as it goes, but it's worthwhile to examine it critically.  It's easy to "understand" something without testing your understanding of it.  It's easy to believe that you understand something when you don't really understand it.  It's also easy to think that you don't understand something when understanding might be just one experiment away.
    So instead of thinking about understanding and testing as static and separate, it might be more useful to treat them as things that we learn and develop in parallel.  As we learn more, we test better, and as we test better, we learn more.
    Cheers,
    ---Michael B.
