One of my favorite TV shows right now is House. In it, a brilliant but antisocial doctor and his staff try to solve medical mysteries. If you don't watch it, you should; the writing is great. Earlier this season, there was an episode where a journalist hits his head and has slurred speech. The team is trying to diagnose this and having no luck. Toward the end, House tells them to go do the blood work again and "don't use a computer." When they look at the blood under a microscope, the cause becomes readily apparent: there are parasites in the blood. The patient has malaria.
Just like the computers examining the blood were not programmed to look for parasites, the software we write to test a program is often not programmed to look for all of the potential failures. When we write test automation, we are focused on one thing. If we are testing a video renderer, the test will make sure the video output is correct. But what if that output causes the UI to become distorted? What if it causes the audio to glitch? The test wasn't programmed to look for that.
About a year ago I first visited the concept of test automation. In that article I gave several reasons why the best testing must be a mix of both manual testing and test automation. The idea that all tests should be automated continues to pervade the industry. The thinking is that testers are expensive and automation is cheap. Over the long haul, that may be true, and it is especially true for projects in sustaining mode, where you want primarily automated tests. However, when developing a new product, relying solely on automated testing can be disastrous. In addition to the issues I talked about in my last post, there is a danger I didn't discuss: the danger of missing something obvious but unforeseen.
Before diving in, let me get some definitions out of the way. Manual testing is just that: manual. It involves a human being interacting with the program and observing the results. Test automation is the use of a programming language to drive the program and automatically determine whether the right actions are taking place.
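As a rough illustration of the difference, here is a minimal sketch of what an automated check might look like, picking up the video renderer example from above. Everything in it (the VideoRenderer class, its render_frame method) is invented for this sketch and stubbed so it runs on its own:

```python
# A minimal sketch of test automation. VideoRenderer is a hypothetical
# stand-in for the component under test; a manual tester would instead
# watch and listen to the actual output.
class VideoRenderer:
    def render_frame(self):
        # Pretend this returns a decoded frame (a grid of pixels).
        return [[0, 0, 0], [0, 0, 0]]

def test_renderer_produces_frame():
    frame = VideoRenderer().render_frame()
    # The automation verifies only what it was programmed to verify:
    # that a frame came back. A distorted UI or glitching audio would
    # sail right past this check.
    assert frame and len(frame) > 0

test_renderer_produces_frame()
print("video test passed")
```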
In addition to the higher cost, and thus higher latency, of automated testing, it is also possible that automated testing will just miss things. Sometimes really obvious things. Just like the computers in House that missed the parasites, test automation will miss things too. Just recently I came across a bug that I'm convinced no amount of automation would ever find. The issue was this: while playing a CD, pressing next song caused the volume to maximize. To a human, this jumped out. To a computer designed to test CD playback, all would seem normal. The next song did indeed play. Even a sophisticated program that knew what each song sounds like would probably not notice that the volume was too high. A programmer would specifically have to go looking for this.
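To make that blind spot concrete, here is a hedged sketch of what such a CD-playback test might look like. The CdPlayer class is hypothetical and stubbed here, with the volume bug deliberately modeled, so the example runs on its own:

```python
# Hypothetical sketch of the CD bug described above. CdPlayer is a
# stub of the player under test, with the volume bug baked in.
class CdPlayer:
    def __init__(self):
        self.track = 1
        self.volume = 25  # percent

    def next_track(self):
        self.track += 1
        self.volume = 100  # the bug: skipping ahead maximizes the volume

def test_next_track_advances():
    player = CdPlayer()
    player.next_track()
    # The test checks exactly what it was programmed to check...
    assert player.track == 2
    # ...and never inspects player.volume, so the bug passes unnoticed.

test_next_track_advances()
print("next-track test passed")
```

A human listener notices the blast of volume immediately; the assertion above has no reason to.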
There are a near-infinite number of things that can go wrong in software. Side effects are common. Automation cannot catch all of these; each one has to be specifically programmed for. At some point the returns diminish and the test is set in stone. Any bugs that lie outside that circle will never be found. At least, not until the program is shipped and people try to actually use it.
The moral of this story: never rely solely on automation. It is costly to have people look at your product, but it is even costlier to miss something. You have to fix it late in the process, perhaps after you ship, which is really expensive. You lose credibility, which is even more expensive. Deciding the mix of manual and automated testing is a balancing act. If you go too far in either direction, you'll fall.
Being married to a nurse practitioner, I can tell you secondhand that while House has great writers, the stories aren't always based in medical fact. I hear that Grey's Anatomy is one of the best shows in terms of sticking to medical fact rather than spouting mumbo jumbo that isn't quite accurate. :-)
The usual cry from managers is "50% BVT automation, 80% code coverage, 99% cranium up rectum". MSFT is a place of managers and butt kissing these days. We have too much emphasis on statistics and not enough on reality.
What do you propose people who own solely an API do? Fire up a debugger and start banging at the Immediate window?
Bill, obviously different products call for different balance points. If all you own is an API with no user interaction, you probably default to mostly automated testing. Most APIs are still used by people somewhere, though. Let us say you own a compiler. In this case, you will be writing mostly automated cases. However, your "manual" cases are using the feature to develop software. That is where you'll find the corner cases. Most products shipped by most companies interact with users somewhere. When and where they do, make sure you have someone really using it.
But that's not the danger of test automation, but of inadequate coverage :-) I bet that when the work gets monotonous, a human misses the obvious bug whereas the automation will catch it without any problem.
The basic point is that we should know what to automate and what not to. Testing the user experience should never be automated - which is the scenario you have described!