As most of my blog posts do, this one stems from a conversation I had recently. The conversation revolved around whether all test cases should be independent or if it was acceptable to have one rely upon another. It is my contention that not only are dependent test cases acceptable, but that they are desirable in many circumstances.
There are two basic ways to structure test cases. The most common sort is what I will term “Independent.” These test cases are self-contained. Each test case performs all of its own setup, testing, and cleanup. If we were testing the Video Mixing Renderer (VMR), an independent test case would create the playback graph, configure the VMR, stream some video, verify that the right video had been played, and tear down the graph. The next test case would repeat each step, but configure the VMR differently.
The second sort of test case is what I will call a “Dependent” test case. This test case carries out only those actions required to actually test the API. All other work to set up the state of the system is done either by the test harness or by a previous test case. For an example, assume we are testing the DirectShow DVD Navigator. The test harness might create the graph. Test case 1 might start playback. Test case 2 might navigate to the right chapter and title. Test case 3 might then run a test that relies on the content in that location. When all is done, the harness tears down the graph and cleans up. Test case 3 relies upon the harness and the test cases before it. It cannot be run without them.
Some will argue that all test cases should be independent. At first glance, this makes a lot of sense. You can run them in any order. They can be distributed across many machines. You never have to worry about one test case interfering with the next. Why would you ever want to take on the baggage of dependent cases?
There are at least two circumstances where dependent test cases are preferable. They can be used to create scriptable tests and they can be much more efficient.
Most test harnesses allow the user to specify a list of test cases to run. This often takes the form of a text or an XML file. Some of the better harnesses even allow a user to specify this list via the UI. Assuming that the list is executed in the specified order (not true of some harnesses), well-factored test cases can be combined to create new tests. The hard work of programming test cases can be leveraged easily into new tests with a mere text editor or a few clicks in a UI.
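A minimal version of such a harness fits in a page of code. The sketch below is hypothetical: the case names, the registry mechanism, and the script format are all invented to show the idea of composing new tests from an ordered list in a plain text file, without recompiling anything.

```python
import os
import tempfile

CASES = {}        # name -> callable, the harness's case registry (assumed design)
state = {"log": []}

def case(fn):
    """Register a test case under its function name."""
    CASES[fn.__name__] = fn
    return fn

@case
def jump_title2():
    state["log"].append("jump_title2")

@case
def select_button1():
    state["log"].append("select_button1")

@case
def play_5_seconds():
    state["log"].append("play_5_seconds")

def run_script(path):
    """Run the cases named in the file, one per line, in file order."""
    with open(path) as f:
        for line in f:
            name = line.strip()
            if name:
                CASES[name]()          # dispatch by name

# A brand-new "test" composed in a text editor, not in C++:
script = "jump_title2\nselect_button1\nplay_5_seconds\nselect_button1\n"
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    path = f.name
run_script(path)
os.unlink(path)
print(state["log"])
# ['jump_title2', 'select_button1', 'play_5_seconds', 'select_button1']
```

Reordering or repeating lines in the script file yields a different test, which is the entire point: the coded cases are the vocabulary, and the script file is the sentence.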
Independent test cases cannot be used this way. Because each one contains its own setup, test, and cleanup, the order they are run in is irrelevant. This can be an advantage in some circumstances, but it also means that once you are done coding, you are done gaining benefit from the work. You cannot leverage that work into further coverage without returning to the code/compile cycle, which is much more expensive than merely adding test cases to a text file.
Let’s return to the DVD example. If test cases are written to jump to different titles and chapters, to select different buttons, to play for different times, etc., they can be strung together to create a nearly infinite matrix of tests. Just using the test harness, one can define a series of test cases to test different DVDs or to explore various areas of any given DVD. I created a system like this, and we were able to create repro cases or regression cases without any programming. This allowed us to respond quickly to issues and spend our energy adding new cases elsewhere. If the DVD tests had been written as independent test cases, we would have had to write each repro or regression case in C++, which would have taken substantially longer. Additionally, because the scripts could be created in the test harness, even testers without C++ skills could write new tests.
Dependent test cases can also be more efficient. When testing a large system like the Windows Vista operating system, time is of the essence. If you want to release a build every day and to do so early enough for people to test it, you need BVTs (build verification tests) that complete in a timely manner. If the time for setup and cleanup is substantial, doing it for each test case will add up. In this case, doing it only once for each test run saves that time.
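The arithmetic behind this is simple. With setup/teardown cost S and per-test cost T, N independent cases cost N·(S + T), while N dependent cases sharing one setup cost S + N·T. The figures below are made up purely for illustration; real graph-construction costs will vary.

```python
SETUP_COST = 30   # seconds to build and tear down the graph (assumed figure)
TEST_COST = 2     # seconds per actual test body (assumed figure)
N = 100           # number of cases in the BVT run

# Independent: setup and cleanup are repeated for every single case.
independent_total = N * (SETUP_COST + TEST_COST)

# Dependent: the harness pays for setup and cleanup once per run.
dependent_total = SETUP_COST + N * TEST_COST

print(independent_total, dependent_total)  # 3200 vs. 230 seconds
```

With these (invented) numbers the dependent run finishes in well under a tenth of the time; the larger S is relative to T, the more the shared setup pays off.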
Dependent test cases work best when the system under test is a state machine. In that instance, setup becomes more complex and factoring good test cases becomes easier.
Dependent test cases are not the answer to all questions. They probably aren’t even the answer to most questions. A majority of the time, independent test cases are best. Their ability to be scheduled without reliance upon other cases makes them more flexible. However, dependent test cases are an important technique to have in your toolbox. In some circumstances, they can be a substantially better solution.
May I ask what does "test harnesses" mean? I'm a foreigner...
I agree with you that "Dependent" and "Independent" test cases are both needed in the test design. I'm doing totally black box testing. When designing test cases, steps to each case are needed. How to design steps for those dependent cases? And also how to record transcripts for these dependent ones? You know, the next case is relevant to the former one... seems a little confusing... Will you give me an example? Thanks. - Stella
A test harness is an application that makes running tests easier. It usually handles logging and executing test cases. I will try to post about the basics here shortly.
I'm not sure I fully understand what you are asking. Are you asking how to record that test case A is dependent on test case B? If so, there is no one right answer. The best way is to name the cases in such a way that someone familiar with the spec or the technology will understand how they work. The way to ensure that others call them in the right order is to create scripts that call them in the correct order. Create scripts for each of the interesting scenarios.
A short while back one of my readers asked what a test harness was. I will answer...