My team recently finished what we call a “bug bash”: a period of time when we tell all of the test developers to put down their compilers and simply play with the product. A bug bash usually lasts a few days; this particular one was two days long. We often make a competition out of it, tracking bug counts across the team, with bragging rights or even prizes for those who come out on top of the list.
Bug bashes are a time when everyone on the team is asked to spend all of their time conducting exploratory testing. Sometimes managers will influence the direction by assigning people end-user scenarios or features to look at. Other times the team is just let go and told to explore wherever they desire. Experience has shown me that some direction can be good. Assigning people to explore an area they don’t usually work on gets new eyes on the product, and with new eyes come new use patterns and new bugs. Recently I’ve also discovered that it can be helpful to track where people have spent their time. During our last bug bash we created a list of areas that should be explored and had people sign off when they had investigated them. This gives us a much better sense of what the coverage looks like and allows us to ensure all areas receive attention.
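In practice the sign-off list can be as simple as a shared spreadsheet, but the idea is easy to sketch in code. The area names and helper functions below are hypothetical illustrations, not the actual list we used:

```python
# Hypothetical sign-off tracker: each area to explore maps to the list of
# testers who have signed off on it. Area names are made up for illustration.
coverage = {
    "media playback": [],
    "network sync": [],
    "settings UI": [],
}

def sign_off(area, tester):
    """Record that a tester has explored the given area."""
    coverage[area].append(tester)

def unexplored(cov):
    """Return the areas nobody has signed off on yet."""
    return [area for area, testers in cov.items() if not testers]

sign_off("media playback", "alice")
sign_off("settings UI", "bob")
print(unexplored(coverage))  # → ['network sync']
```

At the end of the bash, anything still in the unexplored list tells you exactly where coverage is missing.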
Conducting a bug bash can be expensive. There is a lot of work to get done, and putting everything else aside for two days means a lot of other work gets pushed off. Why do we do this? What is the return on the investment? There are three primary reasons that come to mind:
We have found empirically that a bug bash flushes out a lot of bugs in a short period of time. During our most recent bug bash, the number of bugs opened per day jumped to 400% of the daily average. This is important because it frontloads the finding of bugs: the earlier we know about a bug, the more likely we are to be able to fix it. Knowing about more bugs also helps us make more informed triage decisions.
The second reason we conduct bug bashes is because they are likely to find bugs on the seams. Test automation can only find certain kinds of bugs. Exploratory testing is a much better way to find issues on the seams—where functional units join up. Sometimes these bugs are the most critical. Imagine if we could have found the Win7 MP3 bug or the interaction between playing audio and network throughput before shipping the respective products. These are the sort of issues highly unlikely to be found in test automation but which can be found through exploratory testing. We obviously don’t find all such issues through bug bashes, but we do find a lot.
The final reason we run bug bashes is to get a sense of the product as a whole. Most of the time, each of us is focused on one small part of the operating system or another, and it’s hard to get a sense for the state of the forest while staring at individual trees. After spending several days conducting exploratory tests on the product, we get a much better sense of whether the overall product is doing well or whether there are serious issues.
I agree with your experience. I've been running and recommending Bug Hunts for the past 6 or 7 years, and I always find that even if Management is usually skeptical at the beginning of the process, they are won over by the value the Organization gains from these activities.
I'd add 2 things to your observations:
1. My experience with hunts is to run them as a Pair-Testing exercise, and to involve the product developers as well. This brings up bugs and scenarios that are seldom found by testers working independently.
2. An additional value of these activities is that the Organization gains a better perspective on what testing is about, and we tend to gain more respect for what we do and how we do it.
I wrote some additional pointers on the stuff I've been doing here: http://blog.practitest.com/2008/05/what-do-you-pack-when-you-go-for-bug.html
In any case, it's good to see that other people believe in these unconventional approaches. Maybe in time they will also become mainstream :)
Thanks for the additions, Joel. I hadn't considered doing these as a joint exercise between test and dev. That would be very interesting.
Do you analyse the bugs found during a bug bash to find out why they weren't caught by 'normal' testing?
>Do you analyse the bugs found during a bug bash
>to find out why they weren't caught by 'normal' testing?
This is a good question.
Something I've seen during bug bashes is that some people will use the opportunity to finally log annoyances that have been a gripe for a while.
All sorts of little grudges can come out.
Plus, it's a great chance to tear apart the work of that programmer who annoys you the most ;-)
And involving the whole team definitely gives devs a better understanding of testing. (Not a complete understanding, but a better one than we had before.)
We (SAP Portal Platform) call that "QA Days", and like Joel, we usually have Dev with us as well.
One problem with such events is the mass of duplicate bugs. We narrow that down by assigning different areas and scenarios to different testers, but we still see a lot of duplicates, which requires thorough investigation before we can file the right bugs in the system.
How do you tackle that?