Monday, February 28, 2005

Why building software isn’t like building bridges

I was having a conversation with a friend the other night and we came across the age-old “software should be like building buildings” argument.  It goes something like this: software engineering should be more like the engineering of bridges and buildings.  Those, it is argued, are more mature practices.  If software engineering were more like them, programs would be more stable and projects would come in on time more often.  This analogy is flawed.

Before I begin, I must state that I’ve never engineered buildings or bridges before.  I’m sure I’ll make some statements that are incorrect.  Feel free to tell me so in the comments section.

First, making software, at least systems software, is nothing like making buildings.  Engineering a bridge does not involve reinventing the wheel each time.  While there may be some new application of old principles, there isn’t a lot of research involved.  The problem space is well understood and the solutions are usually already known.  On the other hand, software engineering, by its very nature, is new every time.  If I want two bridges, I need to engineer and build two bridges.  If I want two copies of Windows XP, I only engineer and build it once; I can then make infinite perfect copies.  Because of this, software engineering is more R&D than traditional engineering.  Research is expected to have false starts, to fail and backtrack.  Research cannot be put on a strict timeline.  We cannot know for certain that we’ll find the cure for cancer by March 18, 2005.

Second, buildings tolerate small faults far better than software does.  More often than not, placing one rivet or one brick a fraction off won’t cause the building to collapse.  On the other hand, a buffer overflow of even a single byte can allow a system to be exploited.  Buildings are not built flawlessly, not even small ones.  I have a friend whose large brick fireplace ended up inside the room rather than outside the house because the builders got it wrong.  In large buildings there are often lots of small things wrong: wall panels don’t line up perfectly and are patched over, walls are not square to each other, and so on.  These are acceptable problems.  Software is expected to be perfect.  In software, small errors are magnified.  It only takes one null pointer to crash a program or one small memory leak to bring a system to its knees.  In a skyscraper, small errors are painted over.
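
To make that contrast concrete, here is a minimal, hypothetical C# sketch (the program and variable names are mine, purely for illustration).  A single variable left unset is the software equivalent of one brick laid a fraction off, but unlike the brick it is not absorbed by the surrounding structure; it takes the whole program down.

```csharp
using System;

class Program
{
    static void Main()
    {
        // One small oversight: the variable never gets a real value.
        string greeting = null;

        // This single line throws a NullReferenceException and the whole
        // program terminates.  Nothing "paints over" the mistake.
        Console.WriteLine(greeting.Length);
    }
}
```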

Third, software engineering is incredibly complex—even compared to building bridges and skyscrapers.  The Linux kernel alone has 5.7 million lines of code.  Windows 98 had 18 million lines of code.  Windows XP reportedly has 40 million lines of code.  By contrast, the Chrysler building has 391,881 rivets and 3.8 million bricks.

Finally, it is a myth that bridge and building engineering projects come in on time.  One need look no further than Boston's [thanks Mike] Big Dig project to see that.  Software development often takes longer and costs more than expected.  This is not a desirable situation, and we, as software engineers, should do what we can to improve our track record.  The point is that we are not unique in this failing.

It is incorrect to compare software development to bridge building.  Bridge building is not as perfect as software engineers like to think it is, and software development is not as simple as we might want it to be.  This isn’t to excuse the failings of software projects.  We can and must explore new approaches like unit testing, code reviews, threat models, and Scrum, to name a few.  It is to say that we shouldn’t ever expect predictability from what is essentially an R&D process.  Software development is always doing that which has not been done before.  As such, it probably will never reliably be delivered on time, on budget, and defect free.  We must improve where we can but hold the bar at a realistic level so we know when we've succeeded.

Thursday, February 24, 2005

Recommended Books On Testing

Another question I was asked via e-mail.  "Do you know of any good books that explain the testing process in detail? I noticed you mentioned Debugging Applications for .Net and Windows, but I am looking for a book that really explains the basics of 'I have this API/application and I want to test it'. "

Let me say up front that I’m not a big fan of testing books.  Most of what they say is obvious and they are often long-winded.  You can learn to test more quickly by doing than by reading.  Unlike programming, the barrier to entry is low and exploration is the best teacher.  That said, Cem Kaner’s Testing Computer Software gets a lot of good marks.  I’m not a huge fan of it (perhaps I'll blog on my disagreements later), but if you want to speak the language of the testing world, this is probably the book to read.  Managing the Testing Process by Rex Black gives a good overview of the testing process from a management perspective.  Most testing books are very process-intensive; they teach process over technique.  How to Break Software by James Whittaker is more practical.  I have read some of the book and have heard James Whittaker speak.  As the title indicates, the intent of the book is to explain where software is most vulnerable and the techniques it takes to break it at those points.

Part of the difficulty with testing books is that there are so many kinds of testing.  Testing a web service is fundamentally different from testing a GUI application like Microsoft Word, which is again wholly different from testing a multimedia API like DirectShow.  Approaches to testing differ as well.  Some people rely on a lot of manual tests.  Some automate everything with testing tools like SilkRunner or Visual Test.  Others write code by hand to accomplish their testing.  The latter is what my team does.  Most books on testing will either distill this down to the basics, at which point there is no real meat left, or they will teach you a little about everything and not much about anything.  Read the three books I call out above, but make sure to adapt everything they say to your specific situation.  Treat them as food for thought, not instruction manuals.

Do you have a favorite testing book that I haven't mentioned?  Share your knowledge with others in the comments.

Wednesday, February 23, 2005

FCC Broadcast Flag has a bad day

In 2003 the FCC ruled that digital television signals could carry a flag which, when set, would require specific protections to be in place in all hardware and software that handled the signal.  This flag has become known as the broadcast flag.  Recently, a three-judge panel of the U.S. Court of Appeals for the D.C. Circuit heard arguments in a lawsuit seeking to block the ruling.  At least two of the judges had unkind words for the FCC.  The flag is currently scheduled to go into effect this July.

News.com:

"You're out there in the whole world, regulating. Are washing machines next?" asked Judge Harry Edwards. Quipped Judge David Sentelle: "You can't regulate washing machines. You can't rule the world."

Reuters:

"You crossed the line," Judge Harry Edwards told a lawyer for the Federal Communications Commission during arguments before a three-judge panel of the U.S. Court of Appeals for the D.C. Circuit.

"Selling televisions is not what the FCC is in the business of," Edwards said, siding with critics who charge the rule dictates how computers and other devices should work.

Tuesday, February 22, 2005

What is a Test Architect?

I was asked a few questions via e-mail.  Here is the first of my quick answers:

What is the role of a Test Architect?  There is no single definition of the test architect role.  A test architect is an advanced test developer whose scope is larger and who solves harder problems than the average SDE/T.  The specifics of what they do vary greatly.  They might be solving particular hard problems.  They might be the one called in to tackle a seemingly intractable issue.  They may be designing a new test platform.  Or they may be determining group test policy.  Any and all of these fall within the scope of the test architect.  The work they do is often similar to that of an SDE/T; the difference is often one of scope.  An SDE/T will often own a specific technology or be responsible for implementing a part of a system.  A test architect will own the approach to testing, or the development of, entire systems.

Are you a test architect, or do you have a different idea of what one is?  Please leave your opinions in the comments section.  I'd love to see a dialog develop on this subject.  It's something I'm interested in but that isn't yet well understood in most quarters.

Thursday, February 3, 2005

TDD - First Impressions

I spent the last two days in a class covering test-driven development, unit testing, and refactoring.  I hope to provide a more detailed discussion of what I learned at some later point, but for now I thought I'd post my initial impressions.  I've read a lot about TDD, unit testing, refactoring, etc., but I'd never actually *done* test-driven development.  The class had several hands-on exercises; we were using C# and NUnit.  Let me say right here that NUnit is a slick app.  It is unobtrusive, which is exactly what you want in a test harness.  This post is a report of that experience.

First off, it felt really strange.  Generally when developing a program, you think about the big picture first and work your way down to the details.  With TDD, you end up taking the opposite tack.  Because you have to write a test, fail the test, and then write the code, you cannot start with the big parts; instead, you start with the small internals.  Of course it is necessary to put some thought into the bigger picture, but very quickly you are driven into the implementation details, and that creates a feedback loop for your big-picture design.  This feels strange at first, but you become accustomed to it.  The designs that come out seem clean.  Forcing testability makes you think about cohesion, coupling, redundancy, etc.
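
To make the rhythm concrete, here is a minimal sketch of the kind of exercise we did, written in C# with NUnit.  The Calculator class and the test are my own invented example, not something from the course material: you write the test first, watch NUnit report red because the production code doesn't exist yet, then write just enough code to go green.

```csharp
using NUnit.Framework;

// Step 2: the production code, written only after the test below had failed.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

// Step 1: the test, written before any Calculator code existed.
[TestFixture]
public class CalculatorTests
{
    [Test]
    public void AddReturnsSumOfItsArguments()
    {
        Calculator calc = new Calculator();

        // Red: this fails (or doesn't even compile) until Add is implemented.
        // Green: implement Add in the simplest way that passes.
        // Refactor: clean up with the passing test as a safety net.
        Assert.AreEqual(5, calc.Add(2, 3));
    }
}
```

Running the suite in NUnit after every small change is what produces the constant feedback described below.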

When you are doing test-driven development, you are constantly switching between the editor, compiler, and test harness.  You are compiling often.  This is a good thing: it means your code is always working.  You don't go on long coding sprees followed by long bug-fixing sprees; instead, you intermingle the two.  I find it easier to fix an issue as I'm writing the code than later, when I'm compiling some larger chunk of it.

TDD feels good.  Every time you run the unit tests, you get positive feedback that things are working.  When you make a change, even a radical change, you know that if all the test cases pass, everything is still working.  It gives you a peace of mind that you cannot get with older coding models.  If you've never done any test-driven development, give it a try.  Who knows, you might like it.

The class was taught by Net Objectives.  This is the third class I've taken from them.  I've also attended a few of their free seminars.  If you have an interest in OO or Agile techniques, check them out.  I highly recommend their work.