Saturday, May 27, 2006

Making Hard Choices - Which Bugs Do We Fix?

We just shipped Vista Beta 2 and we are now on the march toward shipping what we call RCs or Release Candidates.  During this time, we are forced to make many hard choices about which bugs to fix and which bugs we can live with.  There are those who believe that software should ship without bugs, but those people have probably never tried to ship a product of any significant size.  There will be bugs in software, just like there are bugs in a new house or a new car.  The important thing is to make sure that none of the bugs that remain really matter.


Eric Sink is a software developer at SourceGear and I recently ran across an article by him about shipping with bugs.  Eric makes a few points I think are worth repeating.  He says there are four questions that must be asked:


1) How bad is its impact? (Severity)

2) How often does it happen? (Frequency)

3) How much effort is required to fix it? (Cost)

4) What is the risk of fixing it? (Risk)


The first two determine whether the bug is worth fixing at all.  If the bug doesn't pass muster there, you don't even look at the third and fourth.  A bug being easy to fix is never a reason to take the fix late in a product cycle.  Just because it is a single line doesn't mean we should do it.  There is risk with all fixes; I've seen the easiest ones go wrong.


The second two can be reasons to reject even righteous bugs.  If the bug is bad and happens often but is very hard or very risky to fix, it still might not be worth taking.  This is a hard balancing act.  Sometimes the bug is bad enough that you delay the product.  Other times it is possible to live with it.  Making that choice is never fun.  As owners of a product, we want all bugs to be easy to fix and have low risk.  Unfortunately, that isn't always the case and when it isn't, hard choices have to be made.
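

To make the two-stage filter concrete, here is a minimal sketch of how I think about the four questions interacting.  This is purely my own illustration, not something from Eric's article; the Bug type, the 1-5 scales, and the thresholds are all invented for the example.

```csharp
// Hypothetical illustration of the two-stage triage described above.
// The Bug type, the 1-5 scales, and the thresholds are invented for this sketch.
class Bug
{
    public int Severity;   // 1 (cosmetic) .. 5 (data loss / crash)
    public int Frequency;  // 1 (rare)     .. 5 (every user, every time)
    public int Cost;       // 1 (trivial)  .. 5 (major rework)
    public int Risk;       // 1 (isolated) .. 5 (touches core code paths)
}

static class Triage
{
    public static bool ShouldFix(Bug bug)
    {
        // Stage 1: severity and frequency decide whether the bug is worth
        // fixing at all.  If it fails here, cost and risk are never consulted.
        bool worthFixing = bug.Severity + bug.Frequency >= 6;
        if (!worthFixing)
            return false;

        // Stage 2: even a righteous bug can be rejected late in the cycle
        // if the fix is too expensive or too risky.
        return bug.Cost + bug.Risk <= 6;
    }
}
```

The point is simply that cost and risk never enter the conversation unless severity and frequency have already justified the fix.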


I recommend you read the whole article.  Apparently the one I found is based on a longer article which can be found here.

Friday, May 26, 2006

May Monthly Podcast Update

Another month, another list of podcasts.  A few new ones this month, nothing really fell off of the list.  I think I'm about at my limit.  My commute isn't long enough to support any more than this.


Here is what I listened to regularly this month:


Major Nelson - Covers the world of the Xbox 360.  News, reviews, interviews with insiders.  I'm not sure there is enough new going on in the world of the 360 to keep my interest forever but for now it stays on the list.


This Week in Tech - Leo Laporte hosts a roundtable discussion about the technology news of the day.  Lots of talk about the Intel Mac this month.


Security Now - Very informative show about the world of computer security.  Cryptography series ended this month and several shows covered more general topics.


The Dice Tower - Best of the board gaming podcasts.  Reviews, news, and top ten lists.  Good place to find new games.  One of the co-hosts, Joe Steadman, left the show this month.  It remains to be seen if the new show will retain the same spark.  I hope that it does.


FLOSS Weekly - Interviews in the world of open source software. 


This Week in Media - Roundtable discussion about digital media creation, distribution, etc.  Part political, part technical.  I'm learning a lot.


HDTV Podcast - Not a lot of depth but good coverage of the news in the world of HDTV.  The whole show is less than 1/2 hour so it's easy to squeeze in.


ExtremeTech Podcast - Yet another round table, this one covering PC technology.  Covers things like vertical coolers, new processors, etc.  A little HDTV is thrown in too.  Twenty minutes long.


Honorable mention:


Engadget - Not something I regularly listen to but their joint coverage of E3 was solid.  Podcasts right after the Sony and Nintendo/Microsoft press conferences plus a more general E3 show.


As always, if you have suggestions of media and/or technology podcasts, send them my way.  I am especially looking for podcasts on project management/programming issues.

Thursday, May 18, 2006

Real Estate Finally Goes 2.0

Just a few weeks ago I was searching some real estate listings and it struck me how poor the mapping experience was. It is still very much a "web 1.0" experience. Everything on the page is static. Mapping tools have come a long way in the past few years. It is about time that someone took that technology into the real estate space. It appears that John L. Scott--a real estate company in the Northwestern U.S.--has done just that. It is the first real estate page I've seen utilize "web 2.0" maps.


Several years back online real estate search systems added mapping. At first you only got a high-level view which couldn't be changed. For a while now you've been able to zoom in and scroll around. After using Virtual Earth for a while, however, I find the old interface very constraining. The zooming and scrolling present in most real estate portals is similar to that provided by MapQuest. You have to click on buttons to zoom and to scroll, and these actions are thus much less organic. It is much more natural to use the scroll wheel to zoom and to scroll around by clicking and dragging.


Yesterday morning my wife IM'd me and told me that John L. Scott was using Virtual Earth. Indeed they are, and it is such a big improvement that I can't imagine using anything else. Not only can I zoom and scroll through a natural interface without reloading the whole page, but I can also get an aerial view. Even better, when there is a bird's-eye view available, that too is offered. Imagine being able to see a close-up perspective view of your potential new house. This is a long way from the line-drawn map view of old. In my mind at least, this differentiates them from their competition by a substantial margin.


For an example of the old vs. the new approach, check out this randomly selected house on Coldwell Banker and then on John L. Scott. On each page, enter MLS# 26061922 and then select Map Property.


(For the record, I have no affiliation with John L. Scott.)

Wednesday, May 17, 2006

Testing Systems

This is the third in my series on test harnesses. In this post, I'll talk about systems that do much more than simple test harnesses. A test harness provides a framework for writing and executing test cases, but its focus is on the actual execution of those cases. For complex testing tasks, this is insufficient. A test system can enhance a test harness by supplying a back end for results tracking and a mechanism to automatically run the tests across multiple machines.


A test harness provides a lot of functionality to make writing tests easier, but it still requires a lot of manual work to run them. Typically a test application will contain dozens or even hundreds of test cases. When run, it will automatically execute all of them. It still requires a person to set up the system under test, copy the executable and any support files to that system, run the tests, record the results, and then analyze those results. A test system can be used to automate these tasks.


The most basic service provided by a testing system is that of a database. The test harness will log test results to a database instead of (or in addition to) a file on the drive. The advantages to having a database to track test results are numerous. The results can be compared over time. The results of multiple machines running the same tests can be combined. The aggregate pass/fail rate can be easily determined. An advanced system might even have the ability to send mail or otherwise alert users when a complete set of tests is finished or when certain tests fail.
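

To give a flavor of what that logging looks like, here is a minimal sketch of a harness-side result logger.  This is just an illustration of the idea, not any particular system; the connection string, the TestResults table, and its columns are all invented for the example.

```csharp
using System;
using System.Data.SqlClient;

// Minimal sketch of a harness logging a result row to a results database.
// The table name and columns are invented for illustration.
class ResultLogger
{
    private readonly string _connectionString;

    public ResultLogger(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void LogResult(string testName, string machine, string os, bool passed)
    {
        using (SqlConnection conn = new SqlConnection(_connectionString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "INSERT INTO TestResults (TestName, Machine, OS, Passed, RunTime) " +
                "VALUES (@name, @machine, @os, @passed, @time)", conn);
            cmd.Parameters.AddWithValue("@name", testName);
            cmd.Parameters.AddWithValue("@machine", machine);
            cmd.Parameters.AddWithValue("@os", os);
            cmd.Parameters.AddWithValue("@passed", passed);
            cmd.Parameters.AddWithValue("@time", DateTime.Now);
            cmd.ExecuteNonQuery();
        }
    }
}
```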


It is imperative that any database used to track testing data have good reporting capabilities. The first database I was forced to use to track test results was, unfortunately, not strong in the reporting area. It was easy to log results to the database, but trying to mine for information later was very difficult. You basically had to write your own ASP page that made its own SQL calls to the database and did your own analysis. I lovingly called this system the "black hole of data." A good system has a query builder built in (probably on a web page) which lets users get at any data they want without having to know the database schema and the subtleties of SQL. The data mining needs to go beyond simple pass/fail results. It is often interesting to see data grouped by a piece of hardware on a machine or a particular OS. The querying mechanism needs to be flexible enough to handle pivoting on many different fields.
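

As an illustration of the sort of query a reporting page would build behind the scenes, here is a sketch that pivots pass/fail counts by operating system.  It assumes the same invented TestResults table as the sketch above.

```csharp
using System;
using System.Data.SqlClient;

// Sketch of the kind of aggregate query a reporting page might issue,
// grouping pass/fail counts by operating system.  Schema as assumed above.
class PassRateReport
{
    public static void PrintPassRateByOs(string connectionString)
    {
        const string query =
            "SELECT OS, " +
            "       SUM(CASE WHEN Passed = 1 THEN 1 ELSE 0 END) AS Passes, " +
            "       COUNT(*) AS Total " +
            "FROM TestResults " +
            "GROUP BY OS";

        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlDataReader reader = new SqlCommand(query, conn).ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}/{2} passed",
                        reader.GetString(0), reader.GetInt32(1), reader.GetInt32(2));
                }
            }
        }
    }
}
```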


Another feature often provided by a test system is the ability to automatically run tests across a pool of machines. For this to work, there is a specified set of machines set aside for use by the testing system. Upon a specified event, the test system invokes the test harness on particular machines, which execute the tests and record the results back to the database. These triggering events might be a specified time, the readiness of a build of the software, or simply a person manually scheduling a test.
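

Here is a rough sketch of what the dispatching piece might look like.  The ITestMachine and TestJob abstractions are invented for the illustration; a real system would talk to an agent service running on each lab machine.

```csharp
using System.Collections.Generic;

// Sketch of a dispatcher that hands test jobs to a pool of lab machines.
// ITestMachine and TestJob are invented abstractions for this example.
interface ITestMachine
{
    bool IsIdle { get; }
    void Run(TestJob job);   // asks the machine's agent to run the harness
}

class TestJob
{
    public string HarnessPath;   // harness executable to invoke
    public string TestBinary;    // dll/assembly containing the test cases
}

class Dispatcher
{
    private readonly Queue<TestJob> _pending = new Queue<TestJob>();
    private readonly List<ITestMachine> _pool;

    public Dispatcher(List<ITestMachine> pool) { _pool = pool; }

    public void Schedule(TestJob job) { _pending.Enqueue(job); }

    // Called on a triggering event: a timer, a finished build, or a manual request.
    public void DispatchPending()
    {
        foreach (ITestMachine machine in _pool)
        {
            if (_pending.Count == 0) break;
            if (machine.IsIdle)
                machine.Run(_pending.Dequeue());
        }
    }
}
```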


Part of this distributed testing feature is preparing the machines for testing. This may involve restoring a drive image, copying down the test binaries and any supporting files they might need, and carrying out setup tasks. Setup tasks might include setting registry entries, registering files, mapping drives, and installing drivers. After the tests are run, the testing system will execute tasks to clean up and restore the machine to a state ready to run more tests.
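

One simple way to model that preparation is as ordered setup and cleanup steps wrapped around the actual run, as in this sketch.  The structure and the example steps in the comments are my own invention, not any particular system's design.

```csharp
using System;
using System.Collections.Generic;

// Sketch of machine preparation and cleanup modeled as ordered steps
// around the actual test run.  The step names in comments are examples.
class MachinePreparation
{
    private readonly List<Action> _setupSteps = new List<Action>();
    private readonly List<Action> _cleanupSteps = new List<Action>();

    public void AddSetupStep(Action step) { _setupSteps.Add(step); }
    public void AddCleanupStep(Action step) { _cleanupSteps.Add(step); }

    public void RunWithPreparation(Action runTests)
    {
        // e.g. restore image, copy binaries, set registry keys, install drivers
        foreach (Action step in _setupSteps) step();

        try
        {
            runTests();
        }
        finally
        {
            // restore the machine to a state ready to run more tests
            foreach (Action step in _cleanupSteps) step();
        }
    }
}
```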


Having a testing system with these capabilities can be invaluable on a large project. It can be used to automate what we at Microsoft call BVTs or Build Verification Tests. These are tests that are run at each build and verify basic functionality before more extensive manual testing is done. Through the automation of distributed testing, substantial time can be saved setting up machines and executing tests. People can spend more time analyzing results and investigating failures instead of executing tests.


It is important that I note here the downside of testing systems. Once you have a full-featured testing system, it is tempting to try to automate everything. Rather than spending money having humans run tests, it is possible to just let the test system run everything. This is fine to a point, but beyond that point it is dangerous, and it is very easy to get carried away. Over-automating has two downsides. First, it means you'll miss bugs. Remember, once you have run your automated tests the first time, you will never, ever find a new bug. You may find a regression, but if you missed a bug initially, you'll never find it. As I've discussed before, it is imperative to have someone manually exploring a feature. Second, it means that your testers will not develop a solid understanding of the feature and thus will be less able to find bugs and help with investigation. When a system is overly automated, testers tend to spend all of their time working with the test system and not with the product. This is a prescription for disaster.


When used properly, a good test harness coupled with a good test system can save substantial development time, increase the amount of test coverage you can achieve in a given period of time, and make understanding your results much easier. When used poorly, they can lull you into a false sense of security.

Friday, May 12, 2006

I used a Dremel to upgrade my PC

As you might recall, about 5 months ago I built a pretty cool Media Center PC.  I have been seeing recordings where the lip sync is off.  This seems to be something in the recording itself because if I play it back, the lip sync is off every time.  Stopping and restarting doesn't fix the problem.  I suspect it was the very old tuner I had in the system.  The solution:  get a new tuner.  As the PCIe bus is supposed to be the wave of the future, I decided to get a PowerColor Theater 550 Pro PCIe.  This tuner is based on the ATI 550 Pro chipset, which gets the best reviews.  The card was even on sale at Newegg for like $65.  It's since dropped $5 and now comes with free shipping--shoulda waited.  The card arrived yesterday and all seemed well.  Then, I went to install it.


When I tried to put the card in my computer, I noticed a problem.  The Zalman heat sink on the north bridge got in the way.  There are 2 PCIe x1 slots and the north bridge was positioned in such a manner as to obstruct *both* of them.  Despite the fan noise, I considered going back to the fan the north bridge originally came with, but that was too tall as well.  What to do?


The solution:  cut off enough heat sink fins to allow the card to fit.  I broke out my Dremel tool, outfitted it with a cutting blade, and went to town.  Now, sporting about 1/4 fewer fins than before, the heat sink fits under the PCIe card and all is well.  The card is in the machine and working.  Here's hoping that my A/V sync problems go away.


After this fiasco, I started looking around.  Most motherboards out today have similarly situated x1 PCIe slots.  Almost everything I looked at either had the north bridge in the way or, worse, had the slots right next to the x16 display card slot, where a card would interfere with air flow and oversized fans.  To date, not much is shipping in the x1 PCIe form factor.  With motherboard support like this, that trend will probably continue for a while.

Saturday, May 6, 2006

Advanced Test Harness Features


In a past post I talked about the basic functionality of a test harness.  That is, it provides a reusable application framework for running test cases and reporting their results.  There is much more that a test harness can do, however.  It can provide mechanisms for lightweight test development, model based testing, and scripting.


Basic test harnesses like the Shell98 I spoke of last time or cppunit are what I would call heavyweight harnesses.  By that I mean that the testing code is statically bound to the harness at compile time.  In the most basic case, the harness is simply source code that is compiled along with the test code.  A slightly more advanced model involves a separate library that is statically linked with the test code to form a standalone executable.  This is fine but it means longer compile times, larger binaries, and potentially less flexibility.
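

To make "statically bound" concrete, here is a minimal sketch of a heavyweight harness where the tests live in the same executable as the harness and are registered by hand.  Everything in it is invented for the illustration.

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of a "heavyweight" harness: the tests are compiled into the
// same executable as the harness and registered by hand.  Names are invented.
class StaticHarness
{
    static readonly Dictionary<string, Func<bool>> Tests = new Dictionary<string, Func<bool>>
    {
        { "AdditionWorks",  () => 1 + 1 == 2 },
        { "StringsCompare", () => "a".CompareTo("b") < 0 }
    };

    static void Main()
    {
        foreach (KeyValuePair<string, Func<bool>> test in Tests)
        {
            bool passed = test.Value();
            Console.WriteLine("{0}: {1}", test.Key, passed ? "PASS" : "FAIL");
        }
    }
}
```

Adding a test means editing this source and recompiling the whole executable, which is exactly the inflexibility described above.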


There is a better way to do this.  The test harness and test cases can be separated.  The harness is compiled into an executable and the test cases are loaded by it dynamically.  I call this a lightweight harness.  Because the harness no longer knows which test cases it will be asked to execute, the tests must be discoverable in some manner by the test harness.  The test cases are usually collected in DLLs, JAR files, or assemblies which are loaded by the harness.  The harness uses reflection (C# or Java) or a custom interface to discover which tests are in the test file.  This system is more complex than static binding but it offers several advantages.  It separates the test cases from the harness, allowing them to be varied independently.  It decreases compile times and reduces binary size.  It can also allow a single instance of a test harness to run many different types of tests.  With a static model, this scenario would require running many different executables.  NUnit is an example of a lightweight test harness.
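

Here is a rough sketch of the discovery half of a lightweight harness.  I'm using a naming convention ("public static parameterless methods whose names start with Test") as a stand-in for an attribute-based scheme like NUnit's; the convention and the rest of the code are my own invention for the example.

```csharp
using System;
using System.Reflection;

// Sketch of a "lightweight" harness discovering and running tests at run time.
// The dll to load is passed on the command line.
class LightweightHarness
{
    static void Main(string[] args)
    {
        // args[0] is the path to a dll containing the test cases.
        Assembly testAssembly = Assembly.LoadFrom(args[0]);

        foreach (Type type in testAssembly.GetTypes())
        {
            foreach (MethodInfo method in type.GetMethods(BindingFlags.Public | BindingFlags.Static))
            {
                if (!method.Name.StartsWith("Test") || method.GetParameters().Length != 0)
                    continue;

                try
                {
                    method.Invoke(null, null);
                    Console.WriteLine("{0}.{1}: PASS", type.Name, method.Name);
                }
                catch (TargetInvocationException)
                {
                    // A test signals failure by throwing; the harness records it and moves on.
                    Console.WriteLine("{0}.{1}: FAIL", type.Name, method.Name);
                }
            }
        }
    }
}
```

New test dlls can be dropped in and run without ever touching or recompiling the harness itself.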


Another feature that advanced test harnesses will have is support for model based testing.  Model based testing is a method of testing where test cases are generated automatically by the test framework based on a well-defined finite state machine.  A good description can be found on Nihit Kaul’s blog.  A test harness which supports model based testing will provide mechanisms for defining states and state transitions (actions) which it will use to generate and execute test cases.  The harness will also need to support a mechanism for verifying that the system is still correct after each transition.  Setting up model based testing usually requires a lot of work up front.  The payoff can be quite high, though.
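

To show the pieces involved, here is a small sketch of a model runner: a set of actions, each with the state it applies to, the state it should lead to, and a hook to drive the real system, plus a verification callback after every transition.  The whole thing, including the random-walk generation strategy, is invented for the illustration.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the pieces a model-based harness works with.  Everything here is
// a made-up example: states are plain strings, actions carry a delegate that
// drives the system under test, and verification is a caller-supplied check.
class ModelAction
{
    public string From;          // state the action is valid in
    public string To;            // state the model expects afterwards
    public Action DriveSystem;   // performs the action against the system under test
}

class ModelRunner
{
    // Walks the model for a number of steps, picking a legal action at random
    // and verifying the system against the model after each transition.
    public static void Run(string startState, List<ModelAction> actions,
                           Func<string, bool> systemMatchesState, int steps)
    {
        string state = startState;
        Random random = new Random();

        for (int i = 0; i < steps; i++)
        {
            List<ModelAction> candidates = actions.FindAll(a => a.From == state);
            if (candidates.Count == 0) break;

            ModelAction action = candidates[random.Next(candidates.Count)];
            action.DriveSystem();
            state = action.To;

            // Verification: does the real system agree with the model?
            if (!systemMatchesState(state))
                Console.WriteLine("FAIL: system does not match model state '{0}'", state);
        }
    }
}
```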


In a simple test harness, a test case is not provided any context in which to run.  Imagine the test cases as functions which take no parameters.  They will run exactly the same each time they are executed.  An advanced test harness will provide a mechanism to modify the test cases via parameters or scripting.  The simplest method is to allow parameters to be passed to the test cases via a configuration file.  This is useful when you want several tests which vary only in a single parameter.  I wished for this feature when I was testing DVD playback.  In order to test the performance of our DVD navigator on different discs, I needed test cases to play different chapters.  I was forced to create one test case per chapter I wanted to play.  It would have been much simpler to write one test case and then allow a parameter which supplied the chapter to play.  Some harnesses might even provide a full scripting model where you could programmatically call test cases and provide parameters.  It is easy to envision embedding VB, Python, or even Lisp into a test harness and using that to control the test case execution.  The advantage of both of these methods is the ability to vary the test coverage without being required to program.  This makes test case creation accessible to non-programmers and saves programmers the time of recompiling.
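

Here is a sketch of what the DVD chapter example could have looked like as a single data-driven test case.  The chapters.cfg file, its one-number-per-line format, and the PlayChapterAndMeasure call are all hypothetical; they just illustrate the idea of one test body run once per parameter value.

```csharp
using System;
using System.IO;

// Sketch of a data-driven test case: one PlayChapter test, with the chapters
// to play supplied by a plain-text configuration file (one chapter number per
// line).  The file name and the PlayChapterAndMeasure call are invented.
class DataDrivenPlayback
{
    static void Main()
    {
        foreach (string line in File.ReadAllLines("chapters.cfg"))
        {
            int chapter = int.Parse(line.Trim());

            // A single test body, run once per parameter value.
            bool passed = PlayChapterAndMeasure(chapter);
            Console.WriteLine("PlayChapter({0}): {1}", chapter, passed ? "PASS" : "FAIL");
        }
    }

    // Stand-in for the real playback and performance measurement.
    static bool PlayChapterAndMeasure(int chapter)
    {
        return chapter > 0;  // placeholder result
    }
}
```

Adding coverage for another chapter then means adding one line to the configuration file, not writing and compiling another test case.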


Very advanced test harnesses provide mechanisms to execute the tests on a pool of machines.  We might call these test systems rather than test harnesses.  I’ll discuss these in another post.

Friday, May 5, 2006

Cryptography Podcast Series

Security Now just concluded an amazing podcast series on the subject of cryptography.  Steve Gibson does an excellent job of making the world of crypto accessible.  No serious math is needed to understand it.  He covers everything from stream ciphers, to block ciphers, to public key cryptography and certificates.  If you have any interest in the subject, check out this six-part series:


Intro to Crypto


One Time Pads


Block Ciphers


Public Key Crypto


Cryptographic Hashes


Primes and Certificates


If you want to learn more, there are two books I recommend:


Crypto by Steven Levy - A modern history of crypto.  Levy is a great author.


Applied Cryptography by Bruce Schneier - If you want to learn how this stuff really works.  You'll need some math for this book.