I recently attended a talk advocating combining the management of test development and development. Part of the reasoning was to force quality decisions to fall onto one person. This makes a lot of sense, and it is worth understanding why. In large software development projects we often compartmentalize the various roles. This is true not just of test and development but also of roles like performance, security, customer service, etc. This compartmentalization has a direct effect on how people operate. When someone is responsible for only one aspect of a product, they will often make the right choices for their aspect but the wrong ones for the product overall.
In the traditional software model, test and development are two distinct silos. The disciplines report to different people and perform different jobs. This creates tension between the roles. The tension arises not just from the variance of roles but also from the variance of purpose. Development wants to add features and test wants to constrain them. To ship a quality product, you need to strike a balance. Too many features and the quality will be too low. People won't tolerate it. Too much quality and there won't be enough features to attract customers. Imagine a field. On one side is quality and on the other, features. Now imagine a line drawn between them. To increase quality, you must decrease features. Each decision is a tradeoff.
Security is also an area that is fraught with tradeoffs. The Windows of old shows what can happen if not enough attention is paid to security. Trusted Solaris shows what happens if you pay too much. Not enough attention and the system becomes a haven for viruses, bots, etc. Too much and the system is years behind, runs slow, and is very hard to use.
Performance can be similar. Many changes that increase performance are intrusive. Making them involves trading stability for performance. Other times the performance wins are not visible to the end user. Are they still worth making? Finally, if you are not allowed to degrade performance, it is very hard to add new features. Assuming your previous implementation was not poorly designed, it can be nigh unto impossible to add functionality without increasing CPU usage.
In each of these cases--and many others--a person or team tasked with improving only one side of the coin will make decisions that are bad for the product. Recall that good engineering is about making the right tradeoffs. To make them, one must consider both what is to be gained and what is to be lost. When we give someone a role of focusing solely on security or performance or adding features, we skew their decisions. We implicitly make one side of the coin trump the other. If a person's review is based only on the performance improvements they made in the product, that person will be disinclined to care about how important the new functionality is. If they are tasked solely with securing a product, they will tend not to consider the functionality they break when plugging a potential hole.
The right decisions can only be made at the place where a person is accountable for both sides of the tradeoff. If the different silos (test, dev, security, performance) are too far separated, this place becomes upper management. This is dangerous because upper management often does not have the time to become involved, nor does it have the understanding to make the right decision. Instead, it is better to drive that responsibility lower in the chain. Having engineering leads (not dev leads and test leads), as the talk advocated, is one way to accomplish this. One person is responsible for the location of the quality line. Another way is to increase interaction between silos. Personal bonds can overcome a lot of process. Sharing responsibility can work wonders. Consider dividing the silos into virtual teams that cut horizontally across disciplines. Make those people responsible as a group for some part of the product. As is often the case, measuring the right metrics is half of success.
Hi Steve, I just happened to see your blog on the home page of blogs.msdn.com and wanted to respond...
While I dig test-driven development, I am not sure I espouse truly melding any of the classic roles into one person, because the workload always suffers. I have seen devs take on the PM role while doing dev work, and because they felt the PM's pressure to ship, they sacrificed quality or coding time. Same with test. Same with PMs going to do one of the other disciplines. If you are going to make someone do that, you have to give them more time, and usually what I've seen is a crunch mode where people are pitching in to do two roles.
I've definitely seen the downside of duality, where devs say "I'm not responsible for finding bugs in my code, test is" and wash their hands of buggy code. Test is not the only owner of quality, and writing good code is the fastest way to having that quality.
But I'm not sure that making the devs do their own test work helps if all they care about is shipping a new feature (buggy or not). You are either oriented toward quality or you are not, and the role you happen to be in lets you do different things on its behalf.
Ideally the PM role balances the dev/test divide and creates a better communication channel for them, as well as assisting with the load balancing. I agree the team as a whole should take on quality, but the PM should not be asleep during these debates. :)
--Betsy
I just finished reading Showstopper! by G. Pascal Zachary. It recounts the history of the creation of...