Saturday, April 7, 2007

Breaking Down the Test/Dev Barrier

In a vein similar to the solution I considered in my post on single-function roles, the Braidy Tester has a provocative new post entitled "Let's Go Bust Some Silos," in which he asks what happens when you get rid of the test/dev silos and have everyone work together under one roof.  Instead of having one group that writes dev code but doesn't test it much and another that writes tests but doesn't understand the code much, have a single team which writes and tests fluidly.

One of his commenters, Jim Bullock, raises some good concerns about such an arrangement.  Is it really possible for someone focused on creation to be objective enough to test?  Do we need someone tasked with breaking things to overcome the tendency to see one's own work as a success?  He also suggests that having test/dev silos helps different sorts of personalities get along.  There might be some merit to this.  On a team where test and dev are integrated, a non-coding tester is likely to be treated as inferior.  On a test-only team, that tester's value is probably more highly regarded.  This need not be so, but too often it is.

In considering this question, it is worth asking why we have separate testing teams to begin with.  How did they come about?  At one point all you had were developers.  They tested their own stuff.  Later, independent testers came into the picture and organized into independent teams.  Why was that?  If anyone was there for this formation or knows of books which speak of it, let me know.

I suspect that this happened because developers were too busy to give sufficient time to testing.  The desire for more features made it hard to take the time to test.  In the days before unit tests, testing took a very long time.  Stopping to do it might mean missing a deadline.  Given that, it made sense to bring in someone whose whole time is taken up by testing.  There is no pressure for that person to add features at the expense of testing.  Today, however, when we have automated tests and unit tests which can give us a lot of testing without a lot of time, do we still need this separate role?
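
To make that concrete, here is a minimal sketch of the kind of cheap, repeatable check I have in mind, using Python's standard unittest module.  The function under test, discount_price, is hypothetical; the point is that once a test like this exists, re-running it after every change costs seconds rather than the hours a full manual pass would take.

    import unittest

    def discount_price(price, percent):
        # Hypothetical code under test: apply a percentage discount.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (100 - percent) / 100.0

    class DiscountPriceTests(unittest.TestCase):
        def test_typical_discount(self):
            # 25% off of 200.00 should be 150.00.
            self.assertEqual(discount_price(200.0, 25), 150.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(discount_price(80.0, 0), 80.0)

        def test_out_of_range_percent_is_rejected(self):
            self.assertRaises(ValueError, discount_price, 100.0, 150)

    if __name__ == "__main__":
        unittest.main()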

3 comments:

  1. We don't need a role for testers in the same way that we don't need a role for cardiologists and neurosurgeons... In other words, we do need it if we have a high standard of performance.
    Specialization arises from the specialized requirements for high performance at certain tasks. I'm a programmer, but I've spent my career studying testing and honing my testing skills. Most of my income comes either from teaching testing or being an expert witness tester.
    Take away testing specialization, and you will simply remove any pressure that people doing testing might otherwise feel to study their discipline. Agilists (capital "A") almost universally don't study testing (I'm aware of maybe two exceptions. Antony Marcano is one of those). They use the word testing, but talk to them and you'll hear about test tools, not testing skills. I mean, Steve, few testers who are experienced with automation would argue that automation replaces testers. Automation merely expands the grasp of testing, not its reach. You really think unit tests will save you? Have you only worked with simple systems?
    One counterargument you might make is that most testers don't, in fact, study testing. It's a field full of perpetual amateurs. We are awash in shallow, cynically written textbooks. So, perhaps banishing testing as a specialty will have little impact on real projects.
    Another counterargument would be that software quality standards are generally low, anyway. The world is used to cell phones, appliances, web sites, etc., that don't work very well. So, my argument about high performance may be moot.
    These are not bad arguments, necessarily. I just wonder whether you really understand what you give up by subordinating testing to programming (combining the two activities will lead to studying programming and not testing, I predict). I'm wondering this because, after 20 years of trying to find the edges of testing skill and to fully understand and embody excellent testing, I think there's a lot more left for me to experience and figure out.
    Anyone can be good at testing without a smidgen of training. But if you wish to be fantastic at it, you have to train. BTW, we're holding a testing competition at the next CAST conference. Come and try your test tools against live human brains, if you dare.

  2. James, great comments.  I generally agree with your position.  If you look over my blog, you'll find that I've often written against the idea of too much automation.  It serves its purposes, but there is a lot of room for someone who really understands the feature and probes it deliberately.  Once you write some automation and run it the first time, it is done finding new bugs.  The same is not true of a tester.  Every time he looks at the product, there is a chance he'll find a new bug.
    The trick is finding the balance.  Like many things in life, this is a pendulum that constantly swings.  We swing from no manual testing to lots of it and back.

  3. I think that having developers test their own features is an inherent conflict of interest. Developers are primarily paid to produce features of shippable quality. If they are also in control of the process that assesses whether a feature has met its quality goal, then there will be an ever-present temptation to skimp on the testing, especially during crunch periods. Having separate testers and developers is just a recognition that we are all human.
    FWIW, I see feature testing as separate from code testing. Code testing is something that can be safely delegated to developers and often manifests itself in terms of unit testing, and sometimes in automated product testing. Investing in code testing is what good developers do to make themselves more efficient when adding new features and refactoring existing code.
    Feature testing, on the other hand, is all about testing whether the product is in a suitable state to be released to a customer. This involves not only knowing what the product is supposed to do and how customers are likely to use it, but also where bugs are likely to lurk. A good feature tester will understand the product's internal architecture to a reasonable level of detail and will be able to target test cases toward known brittle areas, code that has undergone significant change, classes being worked on by less experienced members of the team, and so on.
    However, no matter how much a tester enjoys testing, there is still something inherently satisfying about writing code that ships to customers. While I think it does make sense for testers to occasionally be developers (and vice versa), I just don't think they should be one and the same for a given feature.
