Friday, January 28, 2005

Too Much Test Automation?

            There was a time when testing software was synonymous with manual testing.  Now with the rise of test development and the advent of unit testing, automation is becoming more and more prevalent.  Test automation has many benefits but it is not a silver bullet.  It has its costs and is not always the right answer.  Software development organizations must be careful not to let the pendulum swing too far in the direction of automation.  Automating everything is a way to guarantee you miss many bugs.  It is incumbent upon those in charge of test organizations to find a balance between automated and manual testing.

            It is best to start out with a definition.  Test automation is code or script which executes tests without human intervention.  Simple automation will merely exercise the various features of the product.  More advanced automation will verify that the right actions took place.  Results of these tests are stored in log files or databases where they can be rolled up into reports listing a pass/fail rate.
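To make the definition concrete, here is a minimal sketch of that pattern in Python.  Everything in it is hypothetical (the product under test is a stand-in `add` function): each test exercises a feature, verifies that the right thing happened, and the harness rolls the results up into a pass/fail report.

```python
def add(a, b):
    """Stand-in for the product feature under test."""
    return a + b

def test_add_positive():
    assert add(2, 3) == 5    # verification step, not just exercise

def test_add_negative():
    assert add(-2, -3) == -5

def run_suite(tests):
    """Run each test without human intervention and report pass/fail."""
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "PASS"
        except AssertionError:
            results[test.__name__] = "FAIL"
    passed = sum(1 for r in results.values() if r == "PASS")
    print(f"{passed}/{len(results)} tests passed")
    return results

if __name__ == "__main__":
    run_suite([test_add_positive, test_add_negative])
```

In a real harness the results dictionary would be written to a log file or database instead of printed, so the nightly numbers can be aggregated into reports.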

            Automation has many advantages.  Manual testing is expensive.  It requires people to click buttons and observe results.  This isn’t terribly expensive the first time through, but the cost does not diminish with repetition; you pay it in full on every pass as the product progresses.  If you have a daily build (as you should), you pay the cost daily.  This quickly adds up.  Automation is expensive to create, but its incremental cost is very low.  Automation is also consistent.  No one will ever forget to run a test case with automation.

            Because it is cheap, automation can be run in places where manual testing cannot.  Extensive tests can be run on daily builds.  These can even be run in the wee hours of the morning before everyone shows up for the day.  The inexpensive nature of automated testing is what allows unit tests and test driven development to be possible.
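A quick sketch of why unit tests are so cheap to rerun.  The example below uses Python's standard unittest module; `parse_duration` is a hypothetical piece of product code, not anything from a real library.  Once written, the whole suite executes in milliseconds, so running it against every daily build costs essentially nothing.

```python
import unittest

def parse_duration(text):
    """Hypothetical product code: parse an 'mm:ss' string into seconds."""
    minutes, seconds = text.split(":")
    return int(minutes) * 60 + int(seconds)

class ParseDurationTests(unittest.TestCase):
    """Cheap to rerun: verify the parser on every build, unattended."""

    def test_simple(self):
        self.assertEqual(parse_duration("1:30"), 90)

    def test_zero(self):
        self.assertEqual(parse_duration("0:00"), 0)

if __name__ == "__main__":
    unittest.main(exit=False)
```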

            Some test cases cannot easily be done manually.  When testing an API, such as the DirectShow API that I often work with, there is no UI to drive it.  At least some minimal coding must be done to expose the API to a user.  In these cases, automation is the obvious choice.
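To illustrate driving an API directly from test code, here is a sketch.  `MediaGraph` is a hypothetical stand-in loosely modeled on a UI-less playback API like DirectShow; it is not the real interface, just enough surface to show that there are no buttons to click, only calls to make.

```python
class MediaGraph:
    """Hypothetical stand-in for a UI-less playback API."""

    def __init__(self):
        self.state = "stopped"
        self.source = None

    def render_file(self, path):
        """Pretend to build a playback graph for the given file."""
        self.source = path
        return True

    def run(self):
        if self.source is None:
            raise RuntimeError("no source rendered")
        self.state = "running"

    def stop(self):
        self.state = "stopped"

def test_playback_starts():
    """The automation IS the user: it drives the API call by call."""
    graph = MediaGraph()
    assert graph.render_file("clip.avi")
    graph.run()
    assert graph.state == "running"
    graph.stop()
    assert graph.state == "stopped"
```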

            Despite its advantages, automation is not a panacea.  First, it is not free.  Automation is expensive to create.  If you are able to amortize that cost over a lot of runs, the incremental cost becomes low.  On the other hand, if the test is something that will only be run a few times, automation may be more expensive than manual testing.  Decision makers must consider the high initial cost before committing their organization to automated tests.

            Secondly, automation is limited in scope.  After you have run your automated tests for the first time, you are done finding new bugs.  Never again will you find a new issue.  You might catch a regression, but if you missed the fact that clicking a particular combination of buttons causes the app to crash, you’ll never find that issue.  This is true no matter how many times you run the automation.  On the other hand, high quality manual testers will take the opportunity to explore corner cases.  In doing this, they will find many issues that would otherwise go unnoticed until real users find them after the product is shipped.  This lack of exploratory ability is, in my mind, the Achilles heel of automated testing. 

            The third drawback of automation is that there are things that simply cannot be automated well.  I work in the field of audio-video playback.  Trying to automate video quality tests is hard.  It is very expensive—specific media must be paired with specific test cases.  If you have to account for variable content (say, testing television tuners) or if the algorithm is not fixed (varying video cards or decoders), the task becomes even harder.  Using something like PSNR is possible but is not well tuned to the human visual system and thus is susceptible to false positives and false negatives.  Sometimes there is no replacing a person staring at the screen or listening to the speakers.
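For reference, PSNR itself is trivial to compute, which is part of its appeal as an automated oracle.  The sketch below is pure Python with frames represented as flat lists of 8-bit pixel values (an assumption for illustration); it shows the mechanics, not a production-quality video comparison.

```python
import math

def psnr(reference, test, max_value=255):
    """Peak signal-to-noise ratio between two equal-sized pixel buffers.
    Higher means more similar; identical frames give infinity."""
    if len(reference) != len(test):
        raise ValueError("frames must be the same size")
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10((max_value ** 2) / mse)

ref  = [100, 120, 140, 160]
good = [101, 119, 141, 159]   # small error spread across every pixel
bad  = [100, 120, 140, 40]    # one badly corrupted pixel

print(psnr(ref, good))   # high PSNR: frames look alike
print(psnr(ref, bad))    # much lower PSNR
```

The catch the paragraph above describes: PSNR weights every pixel error equally, while the human visual system does not.  A uniform brightness shift can score terribly yet look fine, and a small but glaring localized artifact can still score acceptably.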

            Other problems also exist with automated tests.  A bug in an automated test may mask a bug in the product and go unnoticed for long periods of time.  Automated testing does not emulate real users nearly as well as a real person does.  Manual tests can also be brought online more quickly than automated tests, allowing bugs to be found sooner.
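A hedged sketch of that masking problem (both functions are hypothetical): the product reports failure through its return value, but the test never checks it, so the product bug logs as a pass indefinitely.

```python
def transcode(path):
    """Hypothetical product code with a bug: .wmv input always fails,
    and the failure is reported only through the return value."""
    if path.endswith(".wmv"):
        return False   # the product bug surfaces here
    return True

def buggy_test():
    """Test bug: the return value is never checked, so the failure
    above is masked and this test reports PASS on every run."""
    transcode("clip.wmv")
    return "PASS"
```

Run the automation nightly for a year and the report stays green the whole time; only a person actually watching the output, or a fixed test, would notice.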

            What, then, is a test manager to do?  Manual testing is too expensive, but automated testing is imperfect.  It is imperative that the two sorts of testing be mixed.  Automated testing should be used to verify basic quality but cannot be used exclusively.  There need to be real people simulating the experience of real users.  By hiring developers rather than scripters as test developers, the quality of automated tests can be made very high.  Unit testing can ensure that automated tests are available early in the cycle.  With the increased sophistication of test developers and automation techniques, automated testing can take a larger role than in the past.  It cannot, however, replace real testers.


  1. You've really done a good job of describing the main issues.

    I am a big proponent of intelligent use of automation. And I suspect that there are many people out there calling their huge automation libraries a success just because of their size.

