Friday, September 14, 2007

Test Developers Shouldn't Execute Tests

This view puts me outside the mainstream of the testing community, but I feel strongly that test teams need to be divided into those who write the tests (test developers) and those who execute them (testers).  Let me be clear up front: I don't mean that test developers should *never* execute their tests.  Rather, I mean that they should not be responsible for executing them on a regular basis.  It is more efficient for a test team to create a wall between these two responsibilities than to try to merge them.  There are several reasons why two roles are better than one.

If you require that test developers spend more than, say, 15% of their time installing machines, executing tests, etc., you'll find a whole class of people who don't want to be test developers.  Right or wrong, there is a perception that development is "cooler" than test development.  Too much non-programming work feeds this perception and increases the likelihood that people will run off to development.  This becomes a problem because in many realms, we've already solved the easy test development problems.  If your developers are writing unit tests, even more of the easy work is already done.  What remains is automating the hard things.  How do you ensure that the right audio was played?  How do you know the right alpha blending took place?  How do you simulate the interaction with hardware or networked devices?  Are there better ways to measure performance and pinpoint what is causing the issues?  These and many more are the problems facing test development today.  It takes a lot of intellectual horsepower and training to solve these problems.  If you can't hire and retain the best talent, you'll be unable to accomplish your goals.
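To make the "simulate the interaction with hardware" problem concrete, here is a minimal sketch of one common approach: hiding the device behind an interface and substituting a fake in tests.  All names here (MediaDevice, FakeMediaDevice, Player) are invented for illustration, not taken from any real codebase; real device simulation is usually far more involved.

```python
class MediaDevice:
    """Interface the product code talks to; the real one drives hardware."""
    def play(self, clip_id):
        raise NotImplementedError

class FakeMediaDevice(MediaDevice):
    """Test double that records calls instead of touching real hardware."""
    def __init__(self):
        self.played = []
    def play(self, clip_id):
        self.played.append(clip_id)

class Player:
    """Product code under test; depends only on the MediaDevice interface."""
    def __init__(self, device):
        self.device = device
    def ring(self):
        self.device.play("ringtone.wav")

def test_ring_plays_ringtone():
    # The automated test can now verify behavior without any hardware attached.
    device = FakeMediaDevice()
    Player(device).ring()
    assert device.played == ["ringtone.wav"]

test_ring_plays_ringtone()
```

Writing and maintaining these seams and fakes is exactly the kind of programming work that distinguishes a test developer from a tester executing test cases.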

Even if you could get the right people on the team and keep them happy, it would be a bad idea to conflate the two roles.  Writing code requires long periods of uninterrupted concentration.  It takes a long while to load everything into one's head, build a mental map of the problem space, and envision a solution.  Once all of that is done, it takes time to turn it all into working code.  If there are interruptions in the middle to reproduce a bug, run a smoke test, or execute tests, the process is reset.  The mind has to page out the mental map.  When a programmer is interrupted for 10 minutes, he doesn't lose 10 minutes of work.  He loses an hour of productivity (numbers may vary but you get the picture).  The more uninterrupted time a programmer has, the more he will be able to accomplish.  Executing tests is usually an interrupt-driven activity and so is intrinsically at odds with the requirements to get programming done.

Next is the problem of time.  Most items can be tested manually in much less time than it takes to automate them.  If someone is not sandboxed to write automation, there is pressure to get the testing done quickly and thus run the tests by hand.  The more time spent executing tests, the less time spent writing automation, which means more tests must be run manually, which leaves even less time for automation...  It's a tailspin which can be hard to get out of.
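The tailspin above is easy to quantify with back-of-the-envelope arithmetic.  The numbers below are invented for illustration; the point is that automation carries a one-time cost that pays for itself only after enough runs, which is exactly why short-term schedule pressure pushes toward manual testing.

```python
manual_minutes = 30      # time to run the test by hand, once (assumed)
automated_minutes = 1    # human time per automated run (assumed)
automation_cost = 480    # one-time cost to write the automation, in minutes (assumed)

# Automation pays off once the cumulative time saved exceeds the up-front cost:
# break_even_runs * (manual - automated) >= automation_cost
break_even_runs = automation_cost / (manual_minutes - automated_minutes)
print(break_even_runs)   # roughly 16.6 runs under these assumed numbers
```

Under any plausible numbers the break-even point is many runs out, so a person judged on this week's test pass will rationally keep testing by hand, and the team never escapes.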

Finally, we can talk about specialization.  The more specialized people become, the more efficient they are at their tasks.  Asking someone to do many disparate things inevitably means that they are less efficient at any one of them.  Jack of all trades.  Master of none.  This is the world we create when we ask the same individuals to both write tests and execute them repeatedly.  They are not granted the time to become proficient programmers nor do they spend the time to become efficient testers.

The objections to this system are usually two-fold.  First, that this system creates ivory towers for test developers.  No.  It creates a different job description.  Test developers are not "too good" to run tests; it is just more efficient for the organization if they do not.  When circumstances demand it, they should still be called upon for short periods of time to stop coding and get their hands dirty.  Second, that this system allows test developers to become disconnected from the state of the product.  That is a valid concern.  The mitigation is to make sure that they still have a stake in what is going on.  Have them triage the automated test failures.  Ensure that there are regular meetings and communications between the test developers and the testers.  Encourage the testers to drive test development and not the other way around.

I know there are a lot of test managers in the world who disagree with what I'm advocating here.  I'd love to hear what you have to say.


  2. Steve,
    I'm a little confused as to why this is an issue. This is perhaps because there is important context missing from your post?
    1) Question: Can you give an example of when a test developer would write code for an automated test that needs to be invoked or executed by a human and it cannot or should not be invoked by another automated system (e.g. a continuous integration server). I'm used to creating tests that are invoked automatically in an environment that can be deployed to automatically so that manual effort is spent on investigating test-failures, exploratory testing or on things that can't be automated. I don't want skilled testers doing repetitive tasks like kicking off automated test-runs.
    2) Question: Why do you feel specialisation in being a test-developer is required rather than just being a developer?
    3) Remark:
    I agree, however, that the best person to develop an automated test is not always the best person to perform manual tests... But, sometimes, it is best for the person doing the manual tests to develop aspects of the automation... esp. if you are automating actions that facilitate manual testing (e.g. computer assisted exploratory testing).

  3. The more the software discipline evolves, the more humor I find in the fact that our practices seem to start contradicting older practices which seem natural and intuitive. Let me explain…
    I had an interesting read called - Test Developers Shoul..

  4. So say you have a team of 5 specialized test developers and you want to implement a team makeup like what you suggest, do you fire some of the test developers and replace them with software testers?

  5. @Antony - Automated test code should be executed by an automated system but systems still require humans to maintain them.  Someone has to troubleshoot hardware issues, network issues, etc.  Someone still has to triage the results.
    You ask a very good question about what a test developer does that is different.  I'll address that in a new post.
    Good point about test automation.  I don't intend to keep testers from also being able to automate tests.  They should certainly be empowered to automate away the parts of their job that a computer can do well.
    @Test Guy - Who is running the tests today?  If the team is happy with how things work, keep it going.  As people leave, consider increasing the specialization.

  6. In my post, Test Developers Shouldn't Execute Tests , Antony Marcano asked if we actually need test