Tuesday, February 27, 2007

Programming, Bridges, and ... the Halting Problem?

Jt Gleason contends that building software is not like building bridges because of the halting problem.  He imagines a world where bridge building works the way software does: seemingly random things go wrong.  The bridge works fine for a VW but not for a Volvo.  Sometimes two cars cross the bridge and end up at different destinations.  Is this really what programming is like?  Sometimes it is.  The advent of memory protection helped a little, but it is still possible for something like a malformed string copy to cause a program to crash minutes later in totally unrelated code.  From the perspective of a programmer, this can seem just about as mystifying as having a bridge work for one car but not another.  Add multiple threads to the mix and things can get very strange.
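
To make that concrete, here is a minimal C sketch of the effect (the struct and strings are invented for illustration): the copy itself succeeds without complaint, but it tramples a neighboring field, and the program only blows up later, in code that had nothing to do with the real bug.

```c
/* Minimal sketch of a delayed crash from an overlong string copy.
 * The strcpy itself "succeeds" -- no error is reported -- but it
 * overruns name[] and tramples the neighboring scores pointer.
 * The program only dies later, in code unrelated to the real bug. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct record {
    char name[8];   /* far too small for the string copied below */
    int *scores;    /* the pointer the overflow silently overwrites */
};

int main(void) {
    struct record *r = malloc(sizeof *r);
    r->scores = calloc(4, sizeof *r->scores);

    /* The bug: nothing goes visibly wrong at this point. */
    strcpy(r->name, "a name far longer than eight characters");

    /* ... any amount of unrelated work could happen here ... */

    /* The crash (or worse, silent garbage) happens here instead,
     * because r->scores no longer points anywhere sensible. */
    printf("first score: %d\n", r->scores[0]);

    free(r->scores);
    free(r);
    return 0;
}
```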


Jt seems to blame this seeming randomness, and the difficulty it causes programmers, on the halting problem.  The halting problem basically says that we cannot create a program that can determine, for every possible program, whether it will end or go into an endless loop.  Jt says that because of this "we can never be certain of any result about any computation."  That's not really true.  The halting problem doesn't say we can never know whether any given program will end, but rather that no single procedure can answer that question for all programs.  In constrained situations, we can most certainly tell deterministically what the behavior will be. 
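
To illustrate the "constrained situations" point, here is a toy C sketch (all the names are made up): for one tiny, restricted class of programs, a single counted loop with no other effects, termination is easy to decide.  The halting problem only rules out a decider that works for every program.

```c
/* Toy sketch: deciding halting for one tiny, restricted class of
 * "programs" -- a single counted loop with no other effects,
 * equivalent to: for (i = start; i < bound; i += step) { }
 * For this class the answer is easy; the halting problem only says
 * that no decider exists that works for *every* program. */
#include <stdbool.h>
#include <stdio.h>

struct counted_loop {
    long start;
    long bound;
    long step;
};

/* Terminates iff the body never runs, or the counter moves toward
 * the bound on each iteration (ignoring overflow for simplicity). */
bool loop_halts(struct counted_loop p) {
    if (p.start >= p.bound)
        return true;
    return p.step > 0;
}

int main(void) {
    struct counted_loop a = { 0, 10, 1 };   /* halts after 10 steps */
    struct counted_loop b = { 0, 10, 0 };   /* never halts */
    printf("a halts: %d\n", loop_halts(a));
    printf("b halts: %d\n", loop_halts(b));
    return 0;
}
```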


There is some truth here, though.  The more complex a program gets, the more it resembles a nondeterministic Turing machine and the harder it is to predict its outcome.  When a program is small, we can keep it in our heads and walk down each path.  As it gets bigger, the number of paths grows exponentially and each path gets longer.  In short, it becomes impossible to fully comprehend. 
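
A quick back-of-the-envelope calculation shows how fast the paths pile up: if each two-way branch is independent of the others, every branch doubles the number of paths, so thirty branches already give about a billion of them.

```c
/* Back-of-the-envelope path counting: each independent two-way branch
 * doubles the number of paths through the code, so n branches in
 * sequence give 2^n paths. */
#include <stdio.h>

int main(void) {
    unsigned long long paths = 1;
    for (int branches = 1; branches <= 30; branches++) {
        paths *= 2;
        if (branches % 10 == 0)
            printf("%2d branches -> %llu paths\n", branches, paths);
    }
    return 0;
}
/* Prints roughly: 10 -> about a thousand, 20 -> about a million,
 * 30 -> about a billion. */
```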


This has implications for unit testing.  Unit tests can verify that small pieces of functionality work as intended.  This is like proving that a small program halts.  However, when we take a bunch of proven smaller pieces and put them together, the end result is not necessarily what we intended.  There can be emergent behaviors we would have thought impossible.  The halting problem begins to rear its ugly head.
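
Here is a small C sketch of that failure mode (the functions and tests are invented for illustration): each piece passes a perfectly reasonable unit test on its own, but composing them violates an assumption that neither test ever exercised.

```c
/* Sketch of "proven pieces, surprising whole": each helper passes a
 * reasonable unit test on its own, but composing them violates an
 * assumption that neither test ever exercised. */
#include <assert.h>
#include <stdio.h>

/* Returns the index of value in arr, or -1 if it is absent. */
int find_index(const int *arr, int len, int value) {
    for (int i = 0; i < len; i++)
        if (arr[i] == value)
            return i;
    return -1;
}

/* Returns the label for a slot; assumes 0 <= slot < number of labels. */
const char *label_for(const char **labels, int slot) {
    return labels[slot];
}

int main(void) {
    int ids[] = { 3, 7, 9 };
    const char *labels[] = { "red", "green", "blue" };

    /* Each unit behaves exactly as specified... */
    assert(find_index(ids, 3, 7) == 1);
    assert(label_for(labels, 1) == labels[1]);

    /* ...but the composition quietly breaks label_for's assumption:
     * find_index returns -1 for a missing id, and labels[-1] is an
     * out-of-bounds read that no unit test ever saw. */
    printf("%s\n", label_for(labels, find_index(ids, 3, 42)));
    return 0;
}
```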


This inability to fully comprehend, or even reliably predict, all the behaviors of a large program is part of what makes programming so different from bridge building.  I also contend that the unforgiving nature of computers plays a big part, as does the exploratory nature of most software development.  Writing software is more akin to searching for the cure to a disease than to building yet another suspension bridge. 


Update:  Here's some discussion of this post over at reddit.

1 comment:

  1. >Jt says that because of this "we can never be certain of any result about any computation."
    I probably should have been a bit more careful with the verbiage, but I didn't want to get too technical. Obviously, we can prove simple things can halt. As I mentioned in the comments on Reddit, I meant "any sufficiently complex computation" but I figured that was a given if I was going to bring the halting problem into it. *grin*
    I really like your analogy of programming as searching for a cure for a disease. Unit tests are like the clinical trial to prove the drug actually does something. I think there is a lot more in that analogy too.
    Thanks for your thoughts on the matter!
