Monday, July 30, 2007

More Resources For Teaching Children To Program

As I've stated before, I have a young son whom I have been trying, on and off, to teach to program.  I still owe you a post about what I've found does and does not work.  In the meantime, I ran across this article, which gives several good resources.  It mentions many of the standbys, including Lego Mindstorms, Logo, Scratch, and Basic, but also a few I haven't looked into before.

Friday, July 27, 2007

How To Automate UI Testing

Most software has a user interface.  That means that most test teams spend time testing that interface.  The easiest way to do this is to just click on all the buttons and make sure the right thing happens.  This works, but it doesn't scale.  Eventually you look at the hordes of testers clicking buttons and decide that there has to be a better way.  Enter UI automation.  There are plenty of tools and toolkits to automate UI.  These range from tools like VisualTest to toolkits like Microsoft UI Automation for WPF.  These tools make automating the interface easy...for a while.  After some time you realize that you are spending all of your time updating the test scripts to reflect UI changes and very little time actually testing the product.

The difficulty is that UI is easy to change, and the changes often come very late in the product cycle.  This means that the UI is constantly changing.  For every change to the UI, the tests have to be updated.  The truth is that poking the UI buttons--either manually or automatically--doesn't really work.  Both approaches leave you running on a treadmill, doing the same thing over and over and over.  Is there a better way?  I believe that there is.

The better way is to not test the UI.  At least, don't test it until very late.  Instead, test the functionality behind the UI.  In truth, we usually don't test the UI to make sure the button works, we test the UI to make sure the functionality represented by the button works.  For instance, in a word processor, we don't click the bold button to make sure that the button works so much as to check that text is actually turned bold when such an action is required.  It would be better if we could test the ability to make text bold without having to worry about navigating through the UI buttons to make it happen.  The MakeTextBold function is not likely to change its interface very often whereas the button is prone to changing shape, moving around, becoming a menu item, etc.  If we were testing MakeTextBold instead of the bold button, we wouldn't have to patch our test code every time the designers got a new idea.

This is all well and good but it doesn't work for most software products.  The trouble is the way that they are written.  UI toolkits encourage developers to write the code invoked by the button click event right in the button click handler.  The code might look something like this:

BoldButtonHandler(Context context)
{
    Region selectedText = Framework.GetSelectedRegion();
    selectedText.Font = selectedText.Font.Bold();
}
In reality it would be a lot more complicated, but I think you can get the general idea from this.  The difficulty with this code is that the only way to invoke it is to invoke a button click.  That means finding the right button in the UI and clicking on it.

We can, however, restructure our code.  Instead of doing the work implied by the button click in the button handler, we can make the button handler a very thin layer that just calls another function to do the work.  The advantage is that we decouple the button clicking from the work it implies.  We can then test the work unit without worrying about where the buttons are in the UI.  The new code might look like this:

BoldButtonHandler(Context context)
{
    MakeTextBold();
}

MakeTextBold()
{
    Region selectedText = Framework.GetSelectedRegion();
    selectedText.Font = selectedText.Font.Bold();
}
This requires either a lot of rewriting or, better still, starting the project with the right patterns from the beginning.  One advantage is that your tests become resilient to changes in the UI.  Another is that you can now write unit tests for the functionality behind the UI.  Should you desire to expose your application's functionality for scripted automation, that also becomes a lot easier.  It's definitely worth the extra effort to go this route.
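
To make the decoupling concrete, here is a minimal sketch in Python (the class and function names are hypothetical, not from any real word processor): the click handler is a thin shim over the work function, so a test can exercise the functionality without driving any UI.

```python
class Document:
    """Holds the state the UI operates on."""
    def __init__(self, text):
        self.text = text
        self.bold_ranges = []

    def make_text_bold(self, start, end):
        """The unit of work: bold a range of text.  This is what we test."""
        if not (0 <= start < end <= len(self.text)):
            raise ValueError("range out of bounds")
        self.bold_ranges.append((start, end))


class BoldButton:
    """Thin UI layer: the click handler only delegates."""
    def __init__(self, doc, get_selection):
        self.doc = doc
        self.get_selection = get_selection

    def on_click(self):
        self.doc.make_text_bold(*self.get_selection())


# The test targets the functionality directly; no buttons to find or click.
doc = Document("hello world")
doc.make_text_bold(0, 5)
assert doc.bold_ranges == [(0, 5)]

# The UI shim still works, but nothing in the test depends on its layout.
button = BoldButton(doc, get_selection=lambda: (6, 11))
button.on_click()
assert doc.bold_ranges == [(0, 5), (6, 11)]
```

If the designers replace the button with a menu item tomorrow, only the shim changes; the tests against `make_text_bold` keep running untouched.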

What about testing the actual UI?  Surely we still need to make sure that the bold button actually makes text bold.  We do.  But we can wait until very late in the process, after the UI has stabilized, to do this.  We know all of the functionality works correctly, so we can wait to make sure the buttons work.  This split approach to testing the UI also provides the benefit that when something breaks in a UI test, you know it is in the UI code and not in the underlying functional code.

Do you have additional techniques or methods to test UI?  If so, I'd love to hear them.

Wednesday, July 25, 2007

Clarity: The Most Important Management Deliverable?

During my week-long management training class, I observed something worth sharing.  One of the most important things a manager needs to provide to his (or her) team is clarity.  Give precise instructions.  If asked for details, provide them.  If you don't know enough details for the work to start, consider waiting to give the instructions. 

Let me set the stage a little.  During the training we divided up into groups with people acting as managers, leads, and workers.  When I was a lead, my manager came to us with a project to work on.  He had some idea what he wanted but didn't know all the details.  When we probed him for those details he said he didn't have them.  We started working on our vague mission which later changed as he got a better idea of what he wanted.  The pain of changing deliverables was high.  If you are pretty sure the work will change dramatically, don't assign it.

When put in a similar situation where I had vague instructions from above and had to convey them to my team, I took a different approach.  Rather than say, "I don't know," I made an executive decision.  I knew what parts were most likely not critical in my manager's mind and just made a decision.  This allowed my team to start work and to be comfortable with what I was asking of them.

Another example, also from the same training.  At one point we were having a brainstorming session.  Those of us who were workers asked whether we were trying to come up with solutions to problem A or problem B.  The management said to just do both; we could sort them out later.  This didn't work well.  There was no agreement about what it was we were deciding, and so there were no criteria for judging whether a suggestion was appropriate.  The whole thing was quite frustrating.

These two instances--assigning work and running a meeting--are places where providing clarity is extremely important.  Without clarity, your team will feel frustrated and you will be perceived as lacking confidence or, worse, as a flip-flopper.  If your team is convinced that their mission will change soon, they won't give it their all.  They'll wait for the change.  Likewise, if you don't have clear goals for a meeting, it will just drag on and on, and there is a good chance you'll leave without having made a decision.

Here is some advice for both situations.

When assigning work, be precise and be decisive.  You have to know your audience.  If you are talking to junior team members, you'll need to give a lot of guidance.  You'll likely need to circle back several times and bring them back on course.  If they are doing the right thing, be sure to let them know this explicitly.  Tacit approval is very disconcerting--especially to younger team members.  If you are talking to senior team members, they won't need much direction.  They are capable of filling in the blanks themselves.  Be sure to let them know you are leaving the rest of the decisions up to them though.  If not, they may worry that you'll circle back later and undercut their decisions.

When running a meeting where you want to make a decision, you need to be clear about what the goal is and how you'll decide if you got there.  To do this, you need to know the answers to two questions before you start the discussion.  First, "What precisely is it we're trying to decide?"  It should be one thing.  If it is two, divide it up into two clearly delineated discussions.  Second, "What are the criteria for deciding the answer?"  It isn't enough to know what you want to decide.  It is equally important to know how you'll recognize the decision.  Share both of these pieces of information with the team.  Finally, during the meeting, you need to be in control.  That means you need to recognize when the team is going off on a rabbit trail and rein them back in.  You need to observe when consensus is forming and help it along.

In my mind, more than any technical expertise, more than growing your team, more than putting the right process in place, the most important part of being a good leader is providing your team with a clear vision of where they are going and why.  If they have this vision, they'll make the right choices.  If they don't, you'll have low morale and incoherent decisions being made.  This vision doesn't have to be imposed by you.  It can be influenced by the team.  You are responsible for being a clear advocate for direction, not necessarily for deciding it.

Monday, July 23, 2007

Too Many Questions

Or: Guidelines for dealing with a newbie.

If you are an experienced programmer or a manager, chances are that you've had to deal with a new hire (or intern) who asks lots of questions.  No matter who you are, you likely were that annoying newbie at some point in your career.  Being inexperienced and new means the person doesn't know the answers.  More than that, they don't know how to find the answers.  For this person, the easiest way to get answers is to ask someone.  So they ask lots of questions.  At first, it's fun answering the questions.  After a while, though, it gets tiring.  It can feel as if you're doing all of their work for them.  If you don't watch it, you might be.  If you find your time being monopolized by someone asking for too much help, here are a few suggestions.

  • Let them know that asking questions is the right thing to do - Before you try to curtail their questions, make sure they know that you aren't trying to quash them altogether.  It is important that they not get too frustrated or spend way too long on a problem.
  • Set up office hours - Just like a professor in college, consider setting up specific times when this person can come ask you questions.  This will encourage them to batch up their questions and spend time trying to solve the issues on their own.  This tactic also ensures that you have large blocks of uninterrupted time to get your own work done.
  • Set minimum research times - Set up a minimum amount of time this person should spend investigating on their own before they come to you with a question.  My suggestion is on the order of two hours. 
  • Give them a list of actions to take before coming to you - Provide a list of questions this person should answer or tasks they should complete before coming to you with a question.  Give them a checklist to facilitate their ability to investigate on their own.
  • Have them repeat the answers back to you - If you find someone is asking the same questions, this probably indicates that they didn't understand the answers you gave.  In this case, have them explain the answer back to you before they leave your office.


If you happen to be the new hire, know that your questions can be seen as disruptive if they are too frequent.  Try to impose the recommendations above on yourself.

Friday, July 20, 2007

Keep Your BVTs Clean

At Microsoft we build each of our products on a daily basis.  After each successful build, we run a series of automated tests we tend to call BVTs (Build Verification Tests).  If the BVTs fail, no further testing is done and developers are called in to fix the issue ASAP.  The idea is simple, but trying to actually implement it can reveal unexpected complexities.  One point that is often not considered is which tests to put in the BVT. 

It is sometimes tempting to put all of your automated tests into the BVT.  If they don't take too long to run, why not?  Because BVTs are supposed to be like the coal miner's canary: if they fall over, there is danger ahead.  Drop everything and fix the issue.  If you put all of your automated tests into the BVTs, you'll have lots of non-critical failures.  You'll have something more akin to a Tennessee fainting goat than a canary.  It will fall over often, but you'll quickly learn to ignore it.  If you see a failure and say, "That's okay, we can continue anyway," that test shouldn't be in the BVT.  The last thing you want is to become numb to failure.  Put only those tests into your BVT that indicate critical failures in the system.  Everything else should be in a separate test pass run after the BVTs pass.

It is imperative to keep your BVTs clean. By that, I mean that the expected behavior should be for every test to pass.  It is not okay to have a certain number of known failures.  Why?  Because there is no clear indication of a critical failure.  "I can't recall, do we usually have 34 or 35 failures?"  There are two things to consider in keeping the BVTs clean.  First, are the tests stable?  Second, are the features the tests cover complete?  If the answer to either of these is no, they shouldn't be in the BVTs.

When I say tests should be stable, I mean that their outcome is deterministic and that it is always a pass unless something actionable goes wrong.  Instability in tests can come from poorly written tests or poorly implemented features.  If the tests behave in a seemingly nondeterministic manner, they shouldn't be in your BVT.  You'll be constantly investigating false failures.  Fix the tests before enabling them.  If a feature is flaky, you shouldn't be testing it in the BVT.  It is not stable enough for the project to be relying on it.  File bugs and make sure that developers get on the issues.

BVT tests should only cover aspects of features that are complete.  It is tempting to write tests to the spec and then check them all in even before the feature is spec compliant.  This causes a situation where there are known failures.  As above, this is a sure way to overlook the important failures.  Instead, you should only enable tests after they are passing and where you don't expect that behavior to regress.  If the feature is still in a state of constant flux, it shouldn't be in the BVTs.  You'll end up with expected failures.  BVT tests should reflect what *is* working in the system, not what *should be* working.

Thursday, July 19, 2007

Hofstadter's Law

Good advice for all project managers.  Hofstadter's Law:

It always takes longer than you expect, even when you take into account Hofstadter's Law.

Wednesday, July 18, 2007

Don't Build for the Future

Toward the end of Dreaming in Code, there is this quote from Mitch Kapor:

We've constantly over-invested in infrastructure and design, the fruits of which won't be realized in the next development cycle or even two--that is, not in the next six or twelve months.  You pay a price for that in a loss of agility.  The advice I would give is to do even more of what we've been doing in the last couple of years, which is to sequence the innovations, stage things, and be less ambitious.  Do not build infrastructure...except insofar as you need to meet the goals of the next year.  I'm more and more feeling like the art here is to do agile development without losing the long-term vision.

He's basically mirroring Linus Torvalds' advice I wrote about earlier.  If you take into account too much of your future plan, you'll fail to deliver the present version.  It's better to write flexible code that can be refactored in the future than to write code with the future already built in.  If you have future ambitions for your program, there are good reasons you aren't planning to do it all in the current release:  you've decided that it is too much to get done in the timeframe allotted.  If that's the case, trying to do work that enables it means pulling much of the complexity from version 2 back into version 1.  That can only make version 1 more complicated and increase the time it takes to write.  It also deprives you of the ability to incorporate your experience into the design of those portions.  It's better to leave out the complexity and have more experience when you implement those pieces later for version 2.

Following proper design techniques can help a lot here.  If you follow the major tenets of object-oriented design, your code will have the flexibility required to make the changes you want for the second version.  If it doesn't, you can always refactor the flexibility in.  If you build that complexity now, you delay your project and will probably get it wrong anyway.  The Chandler team spent a lot of time implementing functionality that was later replaced as they came to better understand the problem domain.

In the Innovator's Dilemma, Clayton Christensen gave some advice to startups (whether in big companies or working on their own).  He told them to always save enough funding for a second release.  His reason is that you never really know your market until you are actually selling a product into it.  I think the same holds true for software.  You never really understand what your program is supposed to be until you've written the first version.  Only after you get your product into the wild will you come to understand what it should have been.  This new understanding will inform your decisions for version 2 of your project.  Often, this new understanding will be radically different from what you originally intended.  Delaying investments is the best way to ensure that your energy is focused in the right places and that your implementation is best suited to achieving the updated vision.

Tuesday, July 17, 2007

Using a Sunrocket Gizmo with ViaTalk

If you happen to have found this page as a former Sunrocket customer, here is a way you can get a phone up and running quickly.  One potential VOIP provider you can use is ViaTalk.  This is the one I chose.  Sign up for BYOD service.  Wait until you get your e-mail from ViaTalk with your login information.  It took me less than a day to get mine, even without paying for rush processing.  I have an Innomedia gizmo.  If you have one of the others, these instructions won't work for you.  Here are the instructions I followed to get my gizmo to work with ViaTalk.

  1. Get the admin password for your Gizmo.  The best way to do this is to browse the forums at SunrocketForum or DSLReports.
  2. If you have an Innomedia Gizmo, the login page is  It is case sensitive.
    1. The login name is admin
    2. The password changes based on when Sunrocket last provisioned your system.
  3. Once you log in, go to IP Network->Provisioning Setting. 
  4. Turn provisioning off and reboot.
  5. Log in again.
  6. Go to Management->Administrator
  7. Change the password to something else.
  8. Update.
  9. Go to VoIP->SIP Proxy. 
  10. Change to the server ViaTalk sent you.  For me this is  Yours may be different.
  11. Change SIP Domain to
  12. Set the first codec to PCMU/8000 and all the others to None.  ViaTalk says to use G711U (ulaw); that wasn't an option in my gizmo, but PCMU is the RTP name for the same G.711 ulaw codec, so it works.
  13. Save settings.
  14. Go to VoIP->User Account
  15. Enter the login name ViaTalk sent you into the User ID, User Name, and Authentication ID fields.
  16. Enter the password ViaTalk sent you into the two password fields.
  17. Save settings.
  18. Go to Management->Reboot.
  19. After the reboot, you should be on ViaTalk.

These instructions were taken largely from this thread by Skruffy on SunrocketForum.

If you know of similar instructions for the Linksys or AC-211, please post links in the comments.

Dreaming In Code

I finally finished Dreaming in Code by Scott Rosenberg.  It was initially hailed as the Soul of a New Machine for a new generation.  As such, it fails.  Its depiction of the process and the characters involved is just not that compelling.  It's not poorly written, it just isn't outstanding.  It is, however, an interesting look into the realm of software process theory.

Scott was given inside access to the team creating Chandler, Mitch Kapor's project to create a new e-mail/calendar application--something akin to Outlook but much more flexible.  Scott tells us about the formative stages, the constant changes in direction, the endless meetings.  Some interesting characters, like Andy Hertzfeld, were part of the team.  As a description of a software project, it is palatable but not exciting. 

We're given a view of what can only be described as a failure.  Chandler may become a success eventually, but it has taken over four years and is still not ready for prime time.  It is this failure that provides the interesting part of the book.  Many software projects run aground, but most do so behind closed doors.  It is rare to have a chance to observe a failure and analyze what happened.  Perhaps this opportunity will give us some insights into why things failed, which can be applied to avoid failures in our own projects.  I've posted elsewhere with my ideas on this.

Scott seems to have decided that a description of a failed software process was only moderately interesting, so he also gives us an overview of much of the modern theory of software project management.  He references the writings of Fred Brooks, Alan Kay, Joel Spolsky, and many others.  These discussions are interspersed throughout the text and make up the bulk of the last third of the book.  In my opinion, the book is worth reading just for this content.  It's a great introduction to the subject and would make a good jumping-off point for more detailed research.

Overall, I recommend reading this book if software theory is something you find interesting.  If you are looking for a history book telling the story of a small team creating something amazing, stick to the original classic.

Monday, July 16, 2007

Bye Bye Sunrocket

Sunrocket was my phone company...until today.  This evening I looked at TechMeme to see what was new in the world only to run into this post at GigaOm.  Check the web site.  Everything looks fine.  Check the phone.  No dial tone.  Bummer.  Look around a little more and find this article in the New York Times.  My phone company just died.  No warning.  No automatic switchover to another company.  Just up and gone in an instant.  We have cell phones so we aren't fully out of contact, but how weird.  After so many years of dealing with a government-granted monopoly like Qwest, it's strange to think about a phone company just going away.

I can't stand paying $35/month for basic phone service though so I'll try another VOIP provider.  After doing some research, I think I'm going to try ViaTalk.  They get good reviews and have been around a while.  Hopefully I can actually use all of my year-long contract this time.  :)

7-17 update:  If you are following me to ViaTalk, here are some instructions on using your existing gizmo.

Back From the Cruise

My family and I went on an Alaskan cruise last week.  It was my first cruise.  I liked it.  There is a lot you can do on a ship during a cruise.  There is a casino, Broadway-style shows, art auctions, night clubs, various games (like trivia), and much more.  I didn't do most of it, yet I wasn't bored.  Just sitting on the balcony reading a book and watching the islands pass by was quite enjoyable.  The food was amazing; every evening brought a five-course meal.  The staff was amazingly helpful.  Alaska has a beautiful coastline (when the fog isn't in the way).  Our cruise went through the Inside Passage to Juneau, Skagway, and Ketchikan.  I liked Skagway the best even though it is the smallest.  It is an interesting town to explore.

I can now see why there are so many special-interest cruises.  There is so much time spent hanging around--especially in the evenings--that it would be really cool to walk by a lounge and be able to join a conversation on something of interest to you. 

If you've never taken a cruise, consider it.  It's definitely worth doing at least once.

Sunday, July 15, 2007

What To Test When You Can't Test It All

A naive approach to testing is to just cover everything.  I once saw a testing expert claim something like: "We figured out that all bugs occur in state transitions, so if we just test all of the state transitions, we'll find all of the bugs."  Fascinating, but not very useful.  Simple combinatorics says that testing all of the state transitions for even a simple program quickly becomes intractable; the number of possible paths grows exponentially with the size of the program, so exhaustive testing cannot finish in any reasonable amount of time.  So, what are we, as testers, to do?  Cover as much as possible and hope that nothing goes bad in the other places?  That's usually the approach.  There is another method, though.  Studies have found that the vast majority of bugs involve the interaction of at most two variables.  Thus, we can get thorough-enough testing by covering each of these pairs, which takes far fewer cases than the full cross-product.  Keith Vanden Eynden has a good article discussing the idea and its implementation.
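
The pairwise idea can be illustrated with a toy sketch.  The parameter names below are hypothetical, and this naive greedy generator is only meant to show the reduction; real tools such as Microsoft's PICT use much more sophisticated algorithms.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs generator.

    params: dict mapping parameter name -> list of possible values.
    Returns a list of test cases (dicts) such that every value pair of
    every two parameters appears in at least one case.
    """
    names = list(params)
    # Every (param, value, param, value) pair that must show up somewhere.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((a, va, b, vb))

    all_cases = list(product(*(params[n] for n in names)))
    suite = []
    while uncovered:
        # Pick the full combination that knocks out the most uncovered pairs.
        def gain(case):
            assign = dict(zip(names, case))
            return sum(1 for (a, va, b, vb) in uncovered
                       if assign[a] == va and assign[b] == vb)
        best = dict(zip(names, max(all_cases, key=gain)))
        uncovered -= {(a, va, b, vb) for (a, va, b, vb) in uncovered
                      if best[a] == va and best[b] == vb}
        suite.append(best)
    return suite

# Three parameters with three values each: 27 exhaustive combinations,
# but far fewer cases are needed to cover every pair of values.
params = {"os": ["win", "mac", "linux"],
          "browser": ["ie", "firefox", "opera"],
          "locale": ["en", "de", "ja"]}
suite = pairwise_suite(params)
assert len(suite) < 27
```

The savings grow rapidly: add a fourth three-valued parameter and the exhaustive count jumps to 81, while the pairwise suite barely grows at all.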

Wednesday, July 11, 2007

Are Design Patterns A Bad Idea?

Jeff Atwood has some issues with the idea of Design Patterns.  His issues are basically:

  1. People use design patterns when they could use a simpler solution.

  2. Recurring patterns indicate a place where the language is weak.

Read his post for the details.  I think Jeff makes some good points but misses the mark.  First, his comment about it being a language issue is interesting, but more often than not we don't have a choice of languages.  Not just because our bosses dictate the language, but because most languages have serious weaknesses, and even one that solves our pattern problem may not be robust enough, have the right libraries, be what the team knows, etc.  Second, he's attacking the concept when it's really an implementation of the concept that deserves his criticism.  His point about people using patterns when they could write simpler code is correct, but that isn't really the fault of the book or the concept.  It's a problem with the way people use them.  Design patterns are very useful when we study how they work so we can create similar patterns.  They are bad when we try to copy them directly.  Anyone who reads the Gang of Four will notice that the authors often give several examples of each pattern, all slightly different.  One might also notice that there is a lot of talk about the OO concepts that lead to the patterns.

I've always been skeptical of using the patterns as a shared language.  They shouldn't be treated like a classification system.  They should be used as examples of good design principles to apply where appropriate to other code.  If you can understand why the patterns are useful, you'll be able to create them without memorizing them.  You'll have code that is flexible, but not overly complex.
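
Jeff's language point can be made concrete with a small, hypothetical sketch: the classic Strategy pattern transcribed literally, next to the simpler form a language with first-class functions allows.  Understanding why the pattern exists is what lets you recognize when the simpler form will do.

```python
# The Strategy pattern copied verbatim from a statically typed OO language:
# an abstract base class plus one subclass per algorithm.
class SortStrategy:
    def key(self, item):
        raise NotImplementedError

class ByLength(SortStrategy):
    def key(self, item):
        return len(item)

def sort_with(items, strategy):
    return sorted(items, key=strategy.key)

# In a language with first-class functions, the "strategy" is just a
# function you pass in; no class hierarchy required.
words = ["pear", "fig", "banana"]
assert sort_with(words, ByLength()) == sorted(words, key=len) == ["fig", "pear", "banana"]
```

Both forms embody the same design principle--separate the varying algorithm from the stable code that uses it--which is exactly the lesson worth taking from the pattern, rather than the class diagram itself.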

Every programmer should read the Design Patterns book.  They should do so to learn the rules of good design though, not as a reference work.

Monday, July 9, 2007

Don't Change Code Unless It's Broken

This came up recently on my team.  When making changes to pre-existing code, it is tempting to bring the code in line with our personal tastes.  If we like Allman-style braces, we'll change the code away from K&R bracing.  If we don't like Hungarian notation, we'll be tempted to rename the variables to remove the annotations.  There are many more examples I could mention.  When faced with the temptation to change this code, resist.  Unless you are working under a coding standard that clearly calls out your style, don't change things.  Even if your coding conventions do mention a style, consider ignoring the infraction.  There are two reasons for this.

First, every change to the code is a chance to break something.  The code you are working with is known to be working.  Well, mostly.  There is some reason you are in there, and it might be to fix a bug.  Even so, most of the code is known to be working.  If you change it--even for something small like bracing style--you risk breaking something by subtly changing the code flow.  Unit tests will help reduce the risk here, but unit tests don't cover everything.  Remember, we write code to deliver features to end users.  Why risk introducing a bug unless you are adding something of value to that end user?

Second, it is just bad policy to change the code.  It makes code reviewing harder because it's hard to separate the real changes from the purely cosmetic ones.  Even if you make the cosmetic changes in a separate checkin, someone trying to follow the history of the code will still come across them.  Worse, what happens when the next person to touch the code prefers K&R bracing and changes it back?  In theory, the code could flip-flop every other checkin.

The moral of the story:  Don't make cosmetic code changes.  Change code only because a) it is broken or b) changing it allows you to better implement a new feature.

Friday, July 6, 2007

Cruising To Alaska

I'm headed off on a cruise to Alaska for about a week.  Don't expect to see much blogging going on during that time.  I have been having a lot of inspirations for post topics lately (can you tell?) so expect some good stuff when I return.

Thursday, July 5, 2007

We Owe It All To Smalltalk

The more I learn about Smalltalk, the more I realize what a foundational language it was.  I'm not convinced it is a language that deserves widespread use today; newer languages have improved upon most of its features in one way or another.  However, a lot of modern programming originated with Smalltalk.  Here's a sampling:

  • The GUI was first popularized on the Xerox Alto which, if I'm not mistaken, was programmed primarily in Smalltalk.
  • The Model-View-Controller UI design model came from Smalltalk.
  • Most of our Object Oriented design ideas came from the world of Smalltalk.
  • Unit testing originated in Smalltalk with Kent Beck's Testing Framework.
  • Extreme Programming comes out of the world of Smalltalk programmers.
  • Refactoring came out of the Smalltalk community.
  • Responsibility Driven Development (like CRC cards) came out of Smalltalk too.
  • Design Patterns originated in the Smalltalk community.

Some of this is the language, a lot of it is the community.  Things like XP, Unit Testing, Refactoring, and CRC cards are all obviously community creations.  However, even some of what we usually consider language features come from the community.  In Smalltalk a huge amount of what is enforced by languages today is done through convention.  Smalltalk has objects and inheritance but there is no sense of an interface in the language.  There is no support for abstract classes.  Yet most of the design patterns originated in the world of Smalltalk.

Why is this?  I'm not certain.  At the time, all other languages (other than Simula) were procedural.  Smalltalk was the only one that was object oriented.  The world didn't really tune in to objects until at least a decade after Smalltalk took off.  It's long been said that a programming language influences the way you think.  Perhaps that is what happened.  Maybe in order to come up with extreme programming, you need to be freed from the ways of procedural language.

Why do you think all of these inventions came from this relatively small community?  I'd love to hear your thoughts.

Yak Shaving

(Last post inspired by Dreaming in code...I think)

There is a problem in building software.  We don't know how to estimate how long it will take to build something.  A 2-week project takes 4.  A one month project takes six.  Why is that?  There are a lot of reasons and I won't attempt to cover them all here.  There is one though that I find happens often and which is probably avoidable if we know to look for it.  The concept is sometimes called yak shaving and other times axe sharpening.  It is Zeno's paradox of Achilles and the Tortoise.

Yak shaving as defined by Eric Raymond is "Any seemingly pointless activity which is actually necessary to solve a problem which solves a problem which, several levels of recursion later, solves the real problem you're working on."  That is, we're so busy solving the pre-problems that we fail to solve the real problem.  This is often the work of someone playing the role of the second stonecutter.

James Gosling called the same phenomenon axe sharpening.  This takes its name from a quote attributed to Abraham Lincoln: "Give me six hours to chop down a tree and I will spend the first four sharpening the axe."  The idea is that it makes more sense to do the job with a sharp axe than to try to do it with a dull one.  It is better to spend half an hour writing a Perl script to make changes for you than 4 hours doing a manual search and replace.  The obvious downside is that people will focus so much on sharpening that they forget to stop and actually chop down the tree.  Gosling says there is a darker side too:

But for me, the big problem with "axe sharpening" is that it's recursive, in a Xeno's paradox kinda way: You spend the first two thirds of the time allotted to accomplishing a task actually working on the tool. But working on the tool is itself a task that involves tools: to sharpen the axe, you need a sharpening stone. So you spend two-thirds of the sharpening time coming up with a good sharpening stone. But before you can do that you need to spend time finding the right stone. And before you can do that you need to go to the north coast of Baffin Island where you've heard the best stones for sharpening come from. But to get there, you need to build a dog sled....

Xeno's paradox is resolved because while it is an infinite sum, it's an infinite sum of exponentially decaying values. No such luck in this case. The tool building jobs just get bigger, and you pretty quickly can lose sight of what it was you started to do.

Creating a tool to create a tool to create a tool is an infinite recursive sequence and will never end.  It's often worth creating the first tool and sometimes the second but the returns quickly diminish even if the time taken to write the tools does not.  It's important to keep in mind the ultimate goal.  If writing the tools takes longer than doing the work the slow way, it's best not to write the tool.  Of course, if we can't estimate how long it will take, it's hard to know when we've crossed the line.
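For a sense of scale, the worthwhile first-level tool is often just a few lines of throwaway script.  A minimal Python sketch of the search-and-replace example, where the paths, pattern, and function name are all hypothetical:

```python
import re
from pathlib import Path

def bulk_replace(root, pattern, replacement, glob="*.py"):
    """Apply a regex substitution to every matching file under root.

    Returns the number of files changed."""
    root = Path(root)
    if not root.is_dir():
        return 0  # nothing to sharpen against
    changed = 0
    for path in root.rglob(glob):
        text = path.read_text()
        new_text = re.sub(pattern, replacement, text)
        if new_text != text:
            path.write_text(new_text)
            changed += 1
    return changed

# For example, renaming an API across a source tree:
# bulk_replace("src", r"\bMakeTextBold\b", "make_text_bold")
```

Half an hour on a script like this beats an afternoon of hand-editing.  The trap is when the script starts growing its own configuration format and test suite, at which point the sharpening stone needs a sharpening stone.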

Sometimes the yak shaving happens differently.  Rather than writing a tool to write a tool, we begin writing a piece of the system but decide that what we've written is awfully close to a more general-purpose tool.  It would be a shame to waste what we've written by using it in only this one place, so we extract it into a library.  Of course, a library takes longer to write than a single function.  Each time we do this, the original program takes longer and longer to write.

The moral of the story: be careful every time you add work to a project.  Each time you work on something that is only tangential to the original goal, consider the cost.  There is probably room in the schedule for a few of these detours, but it's easy to add too many.  When you do, you'll look back on your overdue project and wonder where the time went.

Wednesday, July 4, 2007

The Three Stonecutters

Lots of interesting quotes in Dreaming in Code.  This one is the story of three stonecutters.  Each is asked what he is doing.  The first answers that he is "making a living wage."  The second says, "I am doing the best job of cutting stones in the entire country."  The third, "I am building a cathedral."  Each of the three represents an employee you are likely to run across in your days as a manager.

The first represents the employee that's merely putting in his time.  He's the person who works 9-5.  He'll do what is asked, but when the job calls for that extra effort, he'll probably stop short.  There is a place for these employees in an organization.  As long as you can lay out achievable goals and set their direction, they'll serve you well.  Don't give them the highly critical piece though.  They might not come through if it takes a lot of extra effort or extra thought to deliver.

The second often represents a problem.  It's good that he cares about the quality of his work, but he has his perspective wrong.  As a manager, you have a particular task at hand.  That task requires certain elements to be created.  Each of those elements needs to be of at least a particular quality.  However, greatly exceeding that quality doesn't really help.  Take the example of the person building a cathedral.  The foundation stones need to be cut and they need to be straight.  However, being perfectly smooth is probably not required.  If it takes twice as long to make a perfect stone as it does an acceptable stone, that extra time and effort is wasted.  Likewise, I've had programmers deliver way more than was required.  They're often quite proud of it.  I once had someone deliver a test months overdue.  He was excited because it delivered all sorts of extra options beyond what I had asked for.  It had been a lot of hard work getting it all to work right.  Unfortunately, I didn't have a need for most of that extra functionality.  It was wasted effort.  Watch for these types.  They don't have the right priorities.  Programming is always done to accomplish a task.  If programming itself is seen as the task, the project will suffer.

At least some of the problems Chandler faced came from this issue.  The repository, the widget model, and the UI all became areas where the perfection of the parts was seen as more important than the overall product.  So much time was spent getting the small things "right" that the large things were ignored.  What makes or breaks a piece of software is often the integration of the parts.  This second stonecutter worries about his stone, not the stones around him.

The third stonecutter is the one you want to maximize on your team.  This is the person who sees the big picture.  He not only knows what has been asked of him but also why.  Because he understands why, he is able to make intelligent decisions.  Knowing that your stone is going into a cathedral means you know when you need to cut the stones to one level of precision over another.  This is the sort of employee you just point in a direction and then stand out of the way.  They'll knock down whatever walls get in their way.  You won't even have to ask.

Tuesday, July 3, 2007

Failure by Committee

I'm reading Dreaming in Code, and it's becoming clear to me at least one of the reasons that Chandler failed.  Chandler, if you don't know, is the Personal Information Manager application that is the subject of the book.  In my mind, Chandler failed because the team didn't know how to make decisions.  Mitch Kapor tried to run the team in a democratic way.  Everything was decided by committee.  He ran it so democratically that it had insufficient leadership.  For every decision, the book reads as if the team tried to create consensus.  Sometimes it is better to just pick a direction and ask people to follow.

This is basically why Andy Hertzfeld left the team.  He said, "To make a great program, there's got to be at least one person at the center who is breathing life into it.  In a ferocious way.  And that was lacking."  Think about Steve Jobs' role on the Macintosh team.  He drove the project forward.  He generated consensus by leading, not by hoping it would magically materialize just because a group of people was talking.  Kapor, by contrast, would get the group in a room, let them discuss for a while, and then let everyone leave without really coming to a conclusion.  Decisions, once made, were often second-guessed by the next group of people as they tried to establish a new consensus.

An example is the design of the backing store.  There was an argument about whether to use ZODB, a traditional database, or a resource description framework based on the concept of triples.  Any of them would probably have worked to one degree or another.  Yet, instead of picking one and going with it, they spent literally two years making that decision (spring 2001 to June 2003 as I read it).  In the meantime, every other part of the project could make only marginal progress because everything relied on this underlying piece.

Joel Spolsky said he thought the program failed because it didn't have a central vision.  I think I'm saying something similar but from a different angle.  A solid vision is a decision.  By itself, it isn't enough, but it's necessary to start.  Chandler didn't have that vision.  It was supposed to be "revolutionary," but what does that look like?  No one defined revolutionary.  That meant that the project kept changing.  One of the early ideas was an address book, but by the time Andy Hertzfeld left, the idea had been virtually dropped.  This despite the fact that he'd written one.

What should we take away from this?  A successful project must know how to make decisions.  To do that, it helps to have a vision.  Vision doesn't magically materialize from a group talking together.  Instead, it is driven by one individual or perhaps a handful of individuals.  Those individuals don't have to dictate the vision.  Instead, they can cultivate it.  Cultivating requires encouraging those things that support the vision and helping to weed out those that don't.  Without direction, a group of people is like a garden without anyone to weed it.  The good ideas will be overrun by the bad, and in the end it won't be very productive.

Talking is best when the outcome is a decision.  A decision is most likely when there is a shared criterion by which to judge outcomes.  The vision can be that criterion.  Without some bar to hold decisions against, they can always be second-guessed.  "Is this repository good enough for the project?  We don't know, but this one over here is better for some things."

Monday, July 2, 2007

1337 H4x0rz Use Media Center Keyboards

I saw Live Free or Die Hard yesterday.  It's a story that involves, among other things, computer hackers.  Kevin Smith makes an appearance in the movie as an elite hacker.  It was interesting to see that his preferred input device was a Microsoft Media Center Remote Keyboard.  Does that mean his command center is running Windows?

If you're wondering, it was a good movie.  Much better than I expected and a lot of fun to watch.  If you enjoyed the first two Die Hard movies, be sure to catch this one.

Sunday, July 1, 2007

Avoid 3-Card Combinations

I used to play collectible card games.  I attended Whitman College during Richard Garfield's tenure there as a math professor so I got into Magic: The Gathering near its inception.  For those of you who don't play these, the basic system goes something like this.  You buy packs of cards--not unlike baseball cards--which you trade and assemble into decks.  During gameplay, you draw cards from your deck and place them into play.  The rules for playing them differ from game to game but they almost always involve text on the cards that explains their behavior.  The abilities granted by each card vary greatly and they are most powerful in the way they interact with other cards. 

Sometimes you can find combinations which are nearly unbeatable.  If the game is well designed, such killer combinations usually require getting 3 (or more) of exactly the right cards.  Most of the time you can only have a limited number of any one card in a deck.  Economics often precludes it even if the rules don't.  Thus the chance that you will have the 3 cards you need in your hand (or in play) at the same time is very low.  When we were playing, we had a saying that went something like "don't build your deck around 3-card combinations."  While these killer combos were game-winners if they came out, they were so rare that you usually lost.  A much better strategy was to build a deck around simpler concepts that required only two cards at a time to pull off.
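The intuition checks out numerically.  Assuming made-up but Magic-like numbers, a 60-card deck, a 7-card opening hand, and 4 copies of each combo piece, inclusion-exclusion over the missing pieces gives the odds of holding every piece at once:

```python
from math import comb

def combo_odds(deck=60, hand=7, copies=4, pieces=3):
    """Probability an opening hand contains at least one copy of each
    of `pieces` distinct cards, with `copies` of each in the deck."""
    # Inclusion-exclusion: alternately add and subtract hands that
    # entirely miss k of the combo pieces.
    favorable = sum((-1) ** k * comb(pieces, k) * comb(deck - k * copies, hand)
                    for k in range(pieces + 1))
    return favorable / comb(deck, hand)

print(f"3-card combo: {combo_odds(pieces=3):.1%}")  # about 4.7%
print(f"2-card combo: {combo_odds(pieces=2):.1%}")  # about 14.5%
```

Under these assumptions the three-piece combo shows up in fewer than one opening hand in twenty, while the two-piece version shows up roughly three times as often.  That is the saying, in numbers.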

That's nice, but what does this have to do with computers and programming?  It is a good analogy for the way many people manage projects.  If you consider each dependency as a card you need to draw and shipping as the killer combination, it becomes obvious that the winning strategy is not to take on too many dependencies.

If you assume that the new framework or programming language will be mature enough and that your maintenance work won't take very long and that the team you're relying on will deliver their part on time, you've just built your deck around a 3-card combination.  Sure, it will be a spectacular product and take the market by storm when you ship it.  If you ship it.

The world of software development is full of math people.  You'd think we would pay more attention to the probabilities of success.  Somehow we think we are in a reality distortion bubble though.  Math doesn't apply to us.  Sure, it will be hard, but we're better than average.  Things will fall into play.  We'll live happily ever after.  Except, we don't, it does, we aren't, they won't, and we won't.
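The math being ignored is nothing more than multiplication: if each dependency is an independent bet, the odds of shipping are the product of the individual odds, and the product falls fast.  The 80% figures below are invented purely for illustration:

```python
# Hypothetical project bets, each one individually quite likely to pan out.
dependencies = {
    "new framework matures in time": 0.8,
    "maintenance work stays small": 0.8,
    "partner team delivers on schedule": 0.8,
}

# Assuming independence, the chance that everything goes right is the product.
shipping_odds = 1.0
for bet, odds in dependencies.items():
    shipping_odds *= odds

print(f"Chance of shipping on time: {shipping_odds:.0%}")  # 51%
```

Three bets that each feel like sure things leave you with roughly a coin flip, and every additional dependency multiplies the damage.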

The moral of this tale:  Try to accomplish a little less and build on last year's framework.  Your work-life balance will thank you.  So will your marketing people.