Wednesday, December 19, 2007

What Is Test Automation?

I talk about it a lot, but I don't know that I've ever defined it.  A reader recently wrote in and asked what exactly this was.  I suppose that means I should give a better explanation of it.

Long ago in a galaxy far, far away, testers were computer-savvy non-programmers.  Their job was to use the product before customers did.  In doing so, they could find the bugs, report them to the developers, and get them fixed.  This was a happy world, but it couldn't last.  Eventually companies started shipping things called SDKs, or Software Development Kits, full of programming primitives for use by other programmers.  There were no buttons to click.  No input boxes to type the number 1 and the letter Q into.  How was a non-programmer supposed to test these?  Also, as companies shipped larger and larger products, and these products built upon previous products, the number of buttons that needed pushing and input boxes that needed input grew and grew.  Trying to test these was like running on a treadmill turned up to 11.  The cost of testing grew, as did the amount of time developers had to wait for the testers to give the green light to ship a product.  Something had to be done.

The solution:  Test Automation.

Test automation is simply an automatic way of doing what testers were doing before.  Test automation is a series of programs which call APIs or push buttons and then programmatically determine whether the right action took place.

In its simplest form, test automation is just unit tests.  Call an API, make sure you get the right return result or that no exception is thrown.  However, the real world requires much more complex testing than that.  A return result is insufficient to determine true success.  A function saying it succeeded just means it isn't aware that it failed.  That's a good first step, but it is sort of like the check engine light not being lit in the car.  If there is an awful knocking sound coming from under the hood, it still isn't a good idea to drive.  Likewise, it is important to use more than just the return value to verify success.  Most functions have a purpose.  That may be to sort a list, populate a database, or display a picture on the screen.  Good test automation will independently verify that this purpose was fulfilled.
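As a minimal sketch of that idea (the function and test names here are illustrative, not from any particular product), a good test verifies the function's purpose independently instead of trusting the return value alone:

```python
# Hypothetical example: sort_scores is the code under test.
from collections import Counter

def sort_scores(scores):
    """The code under test: should return the scores in ascending order."""
    return sorted(scores)

def test_sort_scores():
    original = [42, 7, 19, 7, 100]
    result = sort_scores(original)

    # Weak check: the call completed and returned something.
    # This is the "check engine light" level of verification.
    assert result is not None

    # Independent verification of the function's purpose:
    # 1. Every adjacent pair of elements is in order.
    assert all(a <= b for a, b in zip(result, result[1:]))
    # 2. Nothing was added, dropped, or changed (multiset comparison).
    assert Counter(result) == Counter(original)

test_sort_scores()
```

The multiset comparison matters: a "sort" that returned an empty list would pass the ordering check alone, so the test also confirms the output is a permutation of the input.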

Other advanced forms of test automation include measuring performance, stressing the system by executing functionality under load, and what we call "end to end" testing.  While unit tests and API tests treat methods and modules as discrete pieces and test them in isolation, end to end testing tries to simulate the experience of the user.  This means pressing the buttons in Windows Media Player to cause a DVD to play and then verifying that it is playing.  Sometimes this can be the most challenging part of testing.
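To make the distinction concrete, here is a toy sketch (every class and method name is hypothetical; real media-player automation interfaces are far more involved).  The end-to-end test drives the same entry points a user would, then verifies the user-visible outcome rather than a single return value:

```python
# Toy stand-in for an application under test.
class Player:
    def __init__(self):
        self.state = "stopped"
        self.position = 0  # frames played

    def press_play(self):
        self.state = "playing"

    def press_stop(self):
        self.state = "stopped"

    def tick(self):
        """Simulate time passing; playback only advances while playing."""
        if self.state == "playing":
            self.position += 1

def test_end_to_end_playback():
    player = Player()
    player.press_play()          # the simulated "button press"
    for _ in range(5):
        player.tick()
    # Verify the user-visible effect, not an internal return code:
    assert player.state == "playing"
    assert player.position > 0   # playback actually advanced

test_end_to_end_playback()
```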

Here's an example of something we had to automate.  Think about how you might approach this.  Windows Vista offers per-application volume.  It is possible to turn down the volume on your World of Warcraft game while leaving Windows Media Player playing loud.  To do this, right-click on the speaker icon in the lower-right-hand corner of your screen and select "Open Volume Mixer."  Moving the slider for an application down should cause its volume to attenuate (get quieter).  Testing this manually is easy.  Just play a sound, lower the volume, and listen.  Now try automating it.
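One possible approach, sketched below (an assumption-laden illustration, not the team's actual harness): render a known test tone, capture the mixed output, and compare signal levels before and after moving the slider.  Here the capture step is simulated by applying a gain to a synthesized tone; a real test would capture the rendered audio, for example through a loopback device.

```python
import math

def rms(samples):
    """Root-mean-square level of a sample buffer."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def tone(freq_hz=440.0, rate=8000, seconds=0.1):
    """Synthesize a sine test tone at unit amplitude."""
    n = int(rate * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

def apply_slider(samples, gain):
    """Stand-in for the per-application volume slider (0.0 to 1.0).
    A real test would move the actual slider and capture the output."""
    return [s * gain for s in samples]

loud = apply_slider(tone(), 1.0)
quiet = apply_slider(tone(), 0.25)

# The automated check: lowering the slider must measurably attenuate
# the output, as judged by signal energy rather than human ears.
assert rms(quiet) < 0.5 * rms(loud)
```

The design choice here is to turn "listen and judge" into a numeric comparison the machine can make: measure energy, not sound quality.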


  1. Hi,
    Thanks a lot for explaining this. But don't you think SDEs solve greater engineering problems? Say I work on a file system: the devs will tackle the challenge of creating something better than what we use today, but the SDETs have the same task as they did for the previous file system, which is to make sure that it works well.
    Best Regards,

  2. That's a very good piece of information, Steve.

  3. @Hitesh, it really depends on what you are doing.  There are times, like creating a new file system, that development gets to do harder work.  How often is a new file system created though?  Usually it is incrementally improved.  Everyone concentrates on the times development gets to do new work, but the majority of their time is spent doing incremental improvements or just maintenance work.
    There are other times that test-dev gets to do the more interesting work.  When I was a test-dev working on DVD and video playback, I was involved in an effort to create what is called DirectX Video Acceleration, or DXVA.  During that process, I had the opportunity to implement an MPEG-2 decoder and accompanying video card driver that conformed to the new standard.  That was pretty exciting greenfield development.

  4. Hi Steve,
    Thanks a lot for the response. I must say that I'm starting to feel better about my SDET role. I will continue my enlightenment by following the blogs of SDETs and talking to people who are in this role.
    Sir, I just love to code, and the thought of no longer being able to do it terrifies me. But I'm slowly understanding the "SDE" in "SDET". Please continue posting great blog entries like this one.
    Thanks for your time and patience.
    Best Regards,

  5. Hello,
    I have one question in mind: "Who is a great SDET?" I have read the blog post by The Braidy Tester, but I have a few queries.
    What makes a great SDET? Also, who is viewed as a good SDET at MS?
    1) One who can find bugs in the code like anything, but can't tell you what's wrong.
    2) One who finds bugs through a methodical process, and thus doesn't find as many of them, but can tell you exactly what the problem is. And even goes on to debug the application and suggest a fix.
    3) One who develops creative tools to automate test procedures, interesting tools that can replicate more scenarios, and faster, but again, though the tool is well written (extensible, maintainable, good-quality code), it somehow doesn't catch that many bugs.
    4) One who has a hacker kind of attitude, who just wants to bring the software down! He sees that as his victory and is not bothered about automating the process, thinking about how to make it better, or asking what went wrong.
    Who, according to you, is the best SDET of them all? In the internal review of MS employees, what things are taken into account for an SDET? Is it just how many bugs she found?
    I have more questions in mind, if you can answer these then I will put them forward.
    Thanks a lot for your support.
    Best Regards,

  6. Hi Steve,
    I remember when you reverse engineered the Fairchild com language and were able to replace the damaged and ancient HP1000 server with a fancy 386 PC running DOS 5.0 and a multi-channel RS232 port. I helped you run the cables to the testers. Remember the belt-driven 70MB hard drive the size of a washing machine and the 8.5" floppies? Unless there is a different Steve Rowe, then my apologies. I credit you with getting me started in computers in the early 90's.
    Best Regards,
    Les W.

  7. @Les, sorry but there must be a different Steve Rowe out there.  In the early 90s I was in school.

  8. Hi Steve,
    I agree that automated tests are maybe the only way to keep quality in the products we develop today at a higher and higher pace. I've tried different test automation strategies. In my experience it must be the developers who automate the tests, even at the higher level of system tests. Developing and maintaining tests is sometimes even harder than developing the software itself. During the Christmas holiday I ran across a UI testing framework that I think has the right approach. It allows me to record and edit tests from inside Visual Studio. For the smoke tests I need in my current project it seems perfect. You can have a look at where they have a movie that shows how it works; it is only in beta (which you have to request by mail). I've also tried TestComplete, but I prefer working in VS with the languages I like (read: C# ;).
    Keep up the good work.