Saturday, December 29, 2007

Just a Geek

I just finished reading Wil Wheaton's "Just a Geek."  It recounts his struggles after leaving Star Trek.  Today Wil Wheaton is a prominent Geek.  He has 3 books, a popular blog, and was the keynote speaker at PAX 2007.  However, for the 15 years between leaving Star Trek: The Next Generation and today, he really struggled.  This book is a look into his mind during that tumultuous period.  He was a has-been who couldn't get work as an actor.  He was a husband and a dad and couldn't provide enough money to pay all the bills.  He was struggling with who he was and with his decision to leave Star Trek before it was over.

The book is basically a collection of blog entries from his blog and his previous site.  However, unlike some books that are mere collections of blog entries, there is a lot of additional context around the entries that you'll only find in this book. 

Wil holds nothing back in his descriptions of what he was going through.  He had it rough for a while.  His style and openness make the reader care about him as a person.  This isn't a book to read to get all the dirt on his life.  Rather, it is a book to read to understand Wil Wheaton the man.  It is an inspiring read.  To see him overcome his doubts and fears.  To watch him brought to his knees and admit defeat only to renew himself for victory on a new front.  One cannot help but be inspired by his story.  I find myself looking forward to reading his other material.  I read his new blog, but only irregularly.  I got tired of reading about his poker games.  After reading this book, though, it's going back on my regularly read list.  It looks like poker is taking a lesser role once again.

Has anyone read his newest book, "The Happiest Days of Our Lives" yet?  Is it any good?

Friday, December 28, 2007

On the Edge

I started On the Edge:  The Spectacular Rise and Fall of Commodore this summer but had to put it on hold as I went back to class.  Now that class is done, I have a few weeks to read what I want and finishing this was my first order of business.

On the Edge tells the tale of the rise and fall of one of the great American computer companies:  Commodore.  You've seen me refer many times on this blog to stories about Commodore and the Amiga.  My first computer was a Commodore 128 and I spent most of high school and college on an Amiga.  While the C= 128 was fun, the Amiga was just amazing.  It was so far ahead, it took the PC nearly a decade to catch up.

This book recounts the brilliant engineering and terrible management that characterized Commodore throughout its history.  Apple had Steve Jobs.  Microsoft had Bill Gates.  Commodore never had that great leader.  It had leaders, but none of them were great.  They never understood the market.  It was Jack Tramiel's company during the Commodore 64 days.  He had something with the PET and the VIC-20 and the Commodore 64.  But to him it was never a computer.  It was just a commodity to sell and he failed to understand how to leverage his great hardware into something bigger.  He cut R&D funding.  He pushed for too many compromises on the altar of cost reduction. He fired nearly everyone who did the best work.  Chuck Peddle created the 6502 and the PET.  He was the leader of the engineering group at Commodore and had great vision for computers.  Tramiel saw him as a threat and fired him.

I didn't realize that Commodore had the sales or the opportunities it had.  Despite what we've been taught to believe, the Apple II didn't start off too well.  Commodore and Radio Shack both outsold it in substantial numbers.  The Commodore 64 not only used the 6502 processor, but Commodore created and owned it.  The same 6502 that was at the heart of the Atari and Apple computers and even the NES.  They had their destiny in their own hands.  They could have created a 16-bit version or at least made it faster.  Instead, they fired all the staff responsible for creating it and lost a great opportunity.

Around the time the Amiga was acquired, Tramiel himself was fired by Irving Gould, the financier of the company.  Gould wouldn't keep management in place long enough to let a real strategy be executed. He too felt threatened by those who were his biggest assets.

Brian Bagnall does an excellent job chronicling the years of Commodore.  The book seems to be based largely on the recollections of people like Chuck Peddle and Dave Haynie but includes a myriad of other sources.  The book follows the personalities rather than the events.  In this way, the reader comes to know these men and can feel for them as they are jerked around by management.

As someone who grew up on Commodore's machines and who faithfully read every Dave Haynie post on FidoNet for years, I found it painful to watch the company I knew and loved die.  It was painful when the Amiga died in 1994.  It was painful to relive it all reading this book.  I enjoyed it though.  If you are one of those who drank the Amiga Kool-aid during its decade-long run, grab this book.

The book is also insightful for those of us working in the technology industry today.  Commodore died not because it couldn't create competitive products; it died because it made bad decisions.  Bad non-technical decisions.  The moral of the story:  it doesn't matter how cool the technology or how good the engineers.  If a company has poor management, it will fail in the long run.  Something to consider before your next job interview.

Thursday, December 27, 2007

First EMI, Then Universal, Now Warner...

Apparently Warner Music just announced that they were releasing all of their tracks DRM-free.  That makes three of the big four giving the heave-ho to DRM.  Sony is now the lone holdout against the future.  How long until they give in to the inevitable?  Next up, the movie industry?  We can only hope.

Wednesday, December 26, 2007

Encapsulate What Varies

It took a lot longer than I expected, but this is the first installment of my Design Principles To Live By series:  Encapsulate Variation.  This is a quick tour through the principles behind the design patterns.  Following these will allow you to make the "right" choice in most situations.  As with everything, there are exceptions to the rules.  These are not set in stone.  Violating them is okay, but only if you understand why doing so is better than following them.

Encapsulate Variation

A test for good software design is how well it can deal with future change.  As the cliche truthfully claims, the only constant is change.  Inevitably any piece of software that is in use will be asked to change.  Business needs will evolve, the problem space will be better understood, etc.  Whatever the reason, the software will need to change.  A good design will allow for that change without too much work.  A bad design will be very hard to modify. 

While it is nearly impossible to predict the specifics of future requirements, it is much easier to understand what is likely to change.  When designing software, look for the portions most likely to change and prepare them for future expansion by shielding the rest of the program from that change.  Hide the potential variation behind an interface.  Then, when the implementation changes, software written to the interface doesn't need to change.  This is called encapsulating variation.

Let's look at an example.  Let's say you are writing a paint program.  For your first version, you choose to only handle .bmp files because there is no compression and they are easy to load.  You know that if the program becomes popular, you'll want to load other files like .jpg, .gif, .png, etc.  The naive way to implement the loading of a .bmp is to write some functions that do just that.  They load the bitmap file into your internal representation.  If you are using an API to load them, you might even be tempted to put the correct API calls directly in the Open button handler.  Doing either will make life harder later.  Every place that has to load the files (the button handler, the file recovery routine, the recently-used menu selections, etc.) will have to change when you add support for JPEGs and portable network graphics. 

A better solution would be to create an interface IImageLoader and inherit from it for BMPLoader.  Then all code handling loading files will call methods on IImageLoader and won't care (or know) about the specifics of the type of image being loaded.  Adding JPEGLoader and PNGLoader will require changing much less code.  If done right, changes will be isolated to just one place.
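Here's a minimal sketch of that idea in Python.  The interface and loader names come from the example above; everything else (the `Image` class, `pick_loader`, the stubbed-out parsing) is hypothetical glue just to show the shape:

```python
from abc import ABC, abstractmethod

class Image:
    """Hypothetical internal image representation."""
    def __init__(self, width, height, pixels):
        self.width, self.height, self.pixels = width, height, pixels

class IImageLoader(ABC):
    """All file-loading code is written against this interface."""
    @abstractmethod
    def load(self, path: str) -> Image: ...

class BMPLoader(IImageLoader):
    def load(self, path: str) -> Image:
        # Real .bmp parsing omitted; return a stub image.
        return Image(1, 1, [0])

class JPEGLoader(IImageLoader):
    def load(self, path: str) -> Image:
        # Real .jpg decoding omitted; return a stub image.
        return Image(1, 1, [0])

def pick_loader(path: str) -> IImageLoader:
    # The one place that knows about concrete loaders.
    loaders = {".bmp": BMPLoader, ".jpg": JPEGLoader}
    ext = path[path.rfind("."):].lower()
    return loaders[ext]()

def open_button_handler(path: str) -> Image:
    # Callers only see IImageLoader.  Adding PNGLoader later
    # means touching pick_loader alone, not every call site.
    return pick_loader(path).load(path)
```

The Open button handler, the file recovery routine, and the recently-used menu would all go through `pick_loader`, so the variation (which formats exist) stays hidden behind the interface.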

The point of this principle is to look ahead a little.  See what is likely to vary, and plan for it.  Don't plan for it by writing the handlers for JPEG.  Maybe HDPhoto will have taken over the world by then.  Rather, ensure that those things most likely to vary are encapsulated and therefore hidden from the rest of the program.

Tuesday, December 25, 2007

Let It Snow!

Here's my white Christmas:


Merry Christmas To All!

Merry Christmas everyone.  I hope you are all able to spend some good time with family and friends.  I'm off to see what Santa brought me.

[2 hours later]  We finished opening presents and it started to snow!  Not a little snow, but a lot of snow.  Large flakes fill the air.  A white Christmas in Washington.  Uncommon but very cool.

Wednesday, December 19, 2007

What Is Test Automation?

I talk about it a lot, but I don't know that I've ever defined it.  A reader recently wrote in and asked what exactly this was.  I suppose that means I should give a better explanation of it.

Long ago in a galaxy far, far away, testers were computer-savvy non-programmers.  Their job was to use the product before customers did.  In doing so, they could find the bugs, report them to the developers, and get them fixed.  This was a happy world but it couldn't last.  Eventually companies started shipping things called SDKs which were Software Development Kits full of programming primitives for use by other programmers.  There were no buttons to click.  No input boxes to type the number 1 and the letter Q into.  How was a non-programmer supposed to test these?  Also, as companies shipped larger and larger products and these products built upon previous products, the number of buttons that needed pushing and input boxes that needed input grew and grew.  Trying to test these was like running on a treadmill turned up to 11.  The cost of testing grew as did the amount of time developers had to wait for the testers to give the green light to ship a product.  Something had to be done.

The solution:  Test Automation.

Test automation is simply an automatic way of doing what testers were doing before.  Test automation is a series of programs which call APIs or push buttons and then programmatically determine whether the right action took place.

In a simple form, test automation is just unit tests.  Call an API, make sure you get the right return result or that no exception is thrown.  However, the real world requires much more complex testing than that.  A return result is insufficient to determine true success.  A function saying it succeeded just means it isn't aware that it failed.  That's a good first step, but it is sort of like the check engine light not being lit in the car.  If there is an awful knocking sound coming from under the hood, it still isn't a good idea to drive.  Likewise, it is important to use more than just the return value to verify success.  Most functions have a purpose.  That may be to sort a list, populate a database, or display a picture on the screen.  Good test automation will independently verify that this purpose was fulfilled.
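To make the distinction concrete, here is a small sketch in Python (the function and test names are hypothetical; `sort_list` stands in for whatever API is under test).  The first assertion is the "check engine light" level; the later ones independently verify the purpose:

```python
from collections import Counter

def sort_list(items):
    # Stand-in for the real API under test.
    return sorted(items)

def test_sort_list():
    data = [3, 1, 2, 2]
    result = sort_list(data)
    # Weak check: the call "succeeded" (no exception, something came back).
    assert result is not None
    # Independent verification: the stated purpose was actually fulfilled.
    # Every element is <= the next one...
    assert all(a <= b for a, b in zip(result, result[1:])), "output not sorted"
    # ...and the output is a permutation of the input (nothing lost or invented).
    assert Counter(result) == Counter(data), "elements changed"
```

Real test automation applies this same pattern to databases, files on disk, or pixels on the screen: don't trust the function's word for it, go look.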

Other advanced forms of test automation include measuring performance, stressing the system by executing functionality under load, and what we call "end to end" testing.  While unit tests and API tests treat methods and modules as discrete pieces and test them in isolation, end to end testing tries to simulate the experience of the user.  This means pressing the buttons in Windows Media Player to cause a DVD to play and then verifying that it is playing.  Sometimes this can be the most challenging part of testing.

Here's an example of something we had to automate.  Think about how you might approach this.  Windows Vista offers per-application volume.  It is possible to turn down the volume on your World of Warcraft game while leaving Windows Media Player playing loud.  To do this, right-click on the speaker icon in the lower-right-hand corner of your screen and select "Open Volume Mixer."  Moving the slider for an application down should cause its volume to attenuate (get quieter).  Testing this manually is easy.  Just play a sound, lower the volume, and listen.  Now try automating it.
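One hedged sketch of the verification half of that problem (the capture itself would go through the platform audio APIs, which I'm leaving out; all names and numbers here are hypothetical): record the mixed output before and after moving the slider, then compare levels numerically instead of by ear.

```python
import math

def rms(samples):
    # Root-mean-square level of a buffer of float samples.
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def db_change(before, after):
    # Level change in decibels between two captured buffers.
    return 20 * math.log10(rms(after) / rms(before))

def verify_attenuation(before, after, expected_db, tolerance_db=1.0):
    # Did the captured level drop by roughly the expected amount?
    # e.g. halving the slider's gain should read close to -6 dB.
    return abs(db_change(before, after) - expected_db) <= tolerance_db
```

The hard parts in practice are the ones this sketch skips: looping the rendered audio back into a capture stream, keeping the two buffers time-aligned, and picking a tolerance loose enough for codec and driver noise but tight enough to catch a broken slider.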

Tuesday, December 18, 2007

Welcome Matthew van Eerde to the Blogosphere

One of my team members, Matthew van Eerde, just joined the blog world.  Check out his inaugural post.

Monday, December 17, 2007

Vista SP1 Release Candidate Available to the Public

Vista SP1 RC1 has just been released for public consumption.  If you want to try it out, you can do so here.  I'm running this on most of my machines without incident.  This includes my Media Center at home.  So, from my few data points, it seems quite stable.  Don't expect any major changes in functionality.  This isn't XP Service Pack 2.  It's more like a traditional service pack.  Lots of bug fixes, but it's not being used to roll out big new features.  If you want to read about it, check out Paul Thurrott's review.  Remember, this is the release candidate, not the final build.  If you install this, you'll have to uninstall it and install the final version whenever it is released.

Thursday, December 6, 2007

Dynamic Range and Color Spaces

Bill Crow, best known for his work on HDPhoto/JPEG-XR, has a great post about dynamic range and color spaces.  If you are into photography or video, understanding this is important.  As we try to aggregate video from more and more sources onto varying display mediums, color science is becoming ever more important.  Bill gives a great introduction to the subject.  If you want to know more, there is a great book called Digital Video and HDTV: Algorithms and Interfaces by Charles Poynton that covers this all in great depth.

Friday, November 30, 2007

Design Principles To Live By

Object-oriented design and design patterns can seem complex.  There are a lot of ideas and cases to consider.  However, there are a handful of principles that, if followed, will result in code that complies with most if not all of the patterns.  These are the patterns behind the patterns.  In my mind, there are 5.  In the next week or so, I'll be writing a post on each of these.

  1. Encapsulate Variation
  2. Design To Interfaces
  3. Loose Coupling Between Classes
  4. Classes Exhibit High Cohesion
  5. Prefer Composition Over Inheritance
  6. Don't Repeat Yourself [late addition]

I'm sure I've missed a couple.  If I have, let me know.

Thursday, November 29, 2007

Inbox Zero

If you're anything like me, you have way too much e-mail to read it all.  To try to cope with this, I've resorted to a collection of rules that sorts my mail into a Byzantine structure of folders.  This helps a little, but it also causes me to miss a lot of mail.  Things get neatly sorted into specific folders where they are summarily ignored for long periods of time.  I just ran across a talk by Merlin Mann discussing a concept he calls "Inbox Zero."  There are basically two main concepts:

  1. Don't let e-mail run your life.  Check it only periodically.
  2. When you do check it, take action on each piece of mail right away.  The key is the nature of the action taken.  In my world, taking action has always meant reading and responding to the mail.  This takes a long time and it's hard to get through the mailbox this way.  Instead, Merlin suggests doing one of the following, all of which are quick:
    1. Delete.  If the mail isn't important, delete it immediately.
    2. Archive.  If this isn't something you need now but might want later, move it to an archive folder.  He says to use just one; that way you don't need to think about how to file it.  In Outlook 2007 (or earlier if you used desktop search or Lookout) and online mail programs, searching can solve the problem folders were intended to solve.
    3. Respond Quickly.  If you can answer in 1-5 sentences, just do it.  Then delete (or archive) the mail.
    4. Just Do It.  If action is required that can be done now, get up and do it.
    5. Flag For Follow-up.  If the mail requires more time, move it to a follow-up folder or mark in such a way that you know to get back to it.  This lets you move on.  Come back to this folder at the end of the day and clean it out.

That's all.  I'm going to give it a shot and see how it works.

Apparently this talk was based on a series of blog posts.

Sunday, November 25, 2007

Video Podcasts

With my new Zune, I've started watching some video netcasts.  Here are the ones I've found the most interesting so far:

  • Tekzilla - Feels a lot like old TechTV.  1/2 hour an episode talking about everything from routers to Black Friday sales.
  • The GigaOm Show - Om Malik interviews headliners and comments on the Web 2.0 world.
  • - Another technology news show.
  • Cranky Geeks - John Dvorak hosts a panel discussion about the latest trends in technology.  Think TWiT but with video.  Not quite as good though.
  • Ask A Ninja - Short form comedy.  Hard to describe.  Just watch it.
  • Tiki Bar - Comedy show set around the adventures in a tiki bar.  Well written and high production value.

If I'm missing any good video casts, let me know.

Saturday, November 24, 2007

The New Zune Revue

Over the past few years I have become an avid podcast listener.  I've been using Creative MP3 players until this point.  I have owned a Zen Nano, Zen Stone Plus, and a Zen Vision M.  The first was good until its 1 GB size became restricting.  The second was a good size, but the battery life was terrible.  It's supposed to be 10 hours but it felt more like I was getting 5-7.  Worse, it's not clear when you are about to run out of battery.  The Zen Vision M is nice, but too heavy for my liking.  The UI is also quite a mess.  It's very inconsistent and not always intuitive.  For various reasons, I've never owned an iPod.  I also skipped the first Zune due mostly to the form factor.  I want to carry my player in my pocket all the time.  A hard drive player is just too big for that to be comfortable for me.

About a month ago I started seeing and reading about the new second generation Zune.  I liked the firmware on the first one but the PC software was pretty bad.  The new one did two things that piqued my interest.  First, they came out with a flash model.  Second, they totally rewrote the PC software.  As an added bonus, there is finally podcasting support.  The new unit looked promising, so on the first day it was available, I bought one of the black 8GB Zunes.  I was not disappointed.  The device is solid, the firmware great, and the PC software very well done.  It's not perfect; there are certainly things that could be better, but overall the experience is amazing.


When you unbox a new Zune, there is no CD included.  Instead, you have to go to the web to download the software.  The first thing I noticed is that there is a native 64-bit version of the Zune Player.  I run a 64-bit version of Vista on my main home machine so this was a welcome thing. 

The new player is elegant.  It is very streamlined.  This means that some functionality you might want is missing, but for the purpose of playing media and syncing to a Zune it is great.  In fact, I don't think I've loaded Windows Media Player since installing the Zune player.  It's that good.  It uses the same 2D control system that the Zune uses.  Visually, it looks great.  It makes iTunes look like a spreadsheet by comparison.

The support for podcasting is well done.  There is a list of hundreds of podcasts included in the marketplace.  You can search for them by keyword or browse by category.  Subscribing is as easy as hitting a subscribe button.  If the podcast you want isn't in the marketplace, you can just enter the URL of the RSS feed and it works the same.  It will then download the 3 most recent episodes and upload them one at a time to the Zune.  You are free to change these defaults if you want.  On the Zune, the podcasts show up in their own menu and are separated into audio and video podcasts.  When you finish listening to a podcast, it is automatically marked as played and will be removed next time you sync.  In a really cool touch, each podcast has its own bookmark.  Unlike the Creative players I'm used to, I'm no longer hostage to the podcast.  It's easy to switch to music or even another podcast and come back to the same place later.

The screen on the Zune 4 and 8 GB models is a bit small, but it's fine for watching many shows.  I have watched video podcasts and TV shows transcoded from my Media Center and both are definitely watchable.  The screen is bright and the colors vivid.

Wireless sync is really cool.  Once you set up the Zune on your wireless LAN, you can cause the Zune to sync any time you want.  It works well.  Look Ma!  No wires.


Unfortunately, not all is wonderful in Zune land.  The first thing I noticed is that I couldn't install the updated firmware.  I got an error 0xC00D11CD.  Searching the web, I found that I'm definitely not the only one hitting this error.  Luckily, I also found a solution hidden in the forums.  If you see this error, try plugging the cable directly into one of the motherboard-mounted USB ports.  I get the error every time I'm plugged into the front jack but never in the back.  Your mileage may vary but hopefully this will help someone.

I really miss the ability to force a transcode before syncing.  The Zune can play audio at high bitrates and video at high resolutions.  If I'm not going to copy the file elsewhere, though, the extra resolution is wasted.  I want an easy way to down-res the audio and video files before syncing.  I can't find a way to do this presently.  This becomes a big problem when I'm trying to sync files from Media Center.  These are often 2-3 GB .dvr-ms files.  Because the Zune can play them directly, it doesn't transcode them.  Unfortunately, I don't want to waste 3 gigs of my 8 for one TV show.

While I'm at it, I'd like to see support for Divx and Audible formats.  Neither are on the device or in the software presently.


While there are definitely some things that could be better, this is a major leap forward from the first Zune.  The device UI is easy to use.  The new player is great.  The support is much broader than the first time out.  I haven't used an iPod extensively so I can't compare, but I don't feel like I'm missing anything with the Zune.  In fact, there are features like wireless sync I would miss if I did have an iPod.  If you are in the market for a new MP3 player this Christmas, give the Zune a good look.  You might be surprised at what you find.

Thursday, November 22, 2007

Happy Thanksgiving!

It is Thanksgiving today.  My wife and I will be having a small gathering of about nine family members.  I always enjoy getting some time to put work aside for a few days and just hang out with family.  I hope you all have a great Thanksgiving today and that many of you get to spend time with family and friends.  I've often said that 80% of happiness is just deciding to be happy.  I encourage everyone to take the opportunity today to reflect on the good things in life.

Saturday, November 17, 2007

Resume Advice

Some resume advice from Steve Yegge.  I don't agree with all of it but it's good stuff to consider when writing your technical resume.

Friday, November 16, 2007

Phone Screen Questions

Steve Yegge from Amazon offers his Five Essential Phone Screen Questions.  It's an old post, but a good one.  His advice is solid.  It's always disappointing to bring in a promising candidate for an interview only to have them bomb.  It would be much better to screen them out early.  Steve gives suggestions for what to ask (and not ask) to make a better determination up front.  Most of the advice is universal.  Some is specific to the type of work Steve is hiring for.  For example, he suggests that one of the five areas to probe is scripting and regex.  There are a lot of jobs out there that need this, but not all.  On my team (Audio Test Development in Windows), scripting and regex don't get a lot of use.  Instead, I would tend to ask about OS concepts and debugging.

Steve gives some other good advice.  First, he says that the interviewer needs to direct the interview.  That seems obvious but it is easy to focus on the areas the candidate wants to cover (the things on his/her resume) and forget the things that are missing.  Second, he gives a bar for the answers.  He says that you are looking for a well-rounded individual.  The acceptable candidate does not have to excel at all five points.  Rather, they cannot fall flat in any area.

I concur with most of Steve's advice.  My addition to his is to probe beyond the first-level answer.  I like to ask questions that go deeper until I find the point at which the candidate no longer knows the next level of answer.  It's easy to cram some high-level ideas.  Everyone can say that threads are more "lightweight" than processes, but what does this really mean?  Either the candidate really understands and has some solid second- (or third-) level answers or they don't really know the subject area.  Don't be afraid to ask follow-up questions until the candidate runs out of answers.  You'll get a much better sense for the candidate's strengths and weaknesses.  Of course, if you are going to do this, tell the candidate up front.  It's not fatal to be unable to explain how page tables can be used to map memory across processes.  You don't want them flustered when they mess up.

Wednesday, November 14, 2007

"Everyone" Is Not A Valid Owner

Saw this over on Codesqueeze.  He talks about the danger of self-organizing teams.  When people aren't given clear responsibilities, things get dropped.  If there is a task which belongs to everyone, it will in the end be accomplished by no one.  Everyone who sends e-mail knows this.  If you want an answer to your mail, never send it to two people at once.  Send two individual mails.  I liked this quote:

Managers with strong knowledge but are weak leaders tend to run hippie communes.

This is often true.  There is a management style which tries to leave responsibility up in the air.  Weak leaders hope their teams will self-organize.  This just doesn't work most of the time.  If it does, it is usually because a strong leader who wasn't the manager took charge.

Tuesday, November 13, 2007

Analog to Digital Conversion

If you want digital audio in a computer, you have to get it from somewhere.  Usually that means taking analog sound out of the air and turning it into the bits that a computer can understand.  Ars Technica gives us another installment of the AudioFile. This one covers the subject of Analog to Digital Conversion.

Monday, November 12, 2007

Always Question the Process

Let me recount a story from the television show Babylon 5.  In one episode there is the description of guard posted in the middle of an empty courtyard.  There is nothing there to protect.  When one of the characters, Londo, questions why, he finds that no one, not even the emperor, knows why.  After doing some research, Londo discovers that 200 years before, the emperor's daughter came by the spot at the end of winter.  The first flower of the spring was poking up through the snow.  Not wanting anyone to step on the flower, she posted a guard there.  She then forgot about the flower, the guard, and never countermanded her order.  Now, 200 years later, there was still a guard posted but with nothing to protect.  There had been nothing to protect for 200 years.

This demonstrates the unfortunate power of process.  It often takes on a life of its own.  Those creating the complex system of rules expect it to be followed.  Once written down though, people stop thinking about why it was done.  Instead, they only expect it to be carried out.  This often leads to situations where work is being done for the sake of process instead of the outcome.  It is from this situation that bureaucracy gets its sullied reputation (well, that and the seeming ineptitude of many bureaucrats).  Process can easily become inflexible.  This is especially true in the technology industry where process is embedded in the code of intranet sites and InfoPath forms.

I encourage you to constantly revisit your process.  Question it.  Why do you do things the way you do?  Is there still a reason for each step?  If you don't know, jettison that step.  Simplify.  You should have just enough process to get the job done, but no more.  Once again, I'll re-iterate.  You hire smart people.  You pay them to think.  Let them.

This isn't to say that all process is bad.  Having common ways of accomplishing common tasks is efficient.  If a process truly makes things more efficient, it should be kept.  If not, it should be killed.  What was at one time efficient probably isn't any more.  Be vigilant.

Saturday, November 10, 2007

The Ultimate Geek Jacket

With Christmas approaching, here is a cool idea for the gadget-lover.  The ScottEVest Evolution Jacket is a waterproof jacket with 25 pockets for all the cellphones, Zunes, PDAs, pens, etc. that we tend to carry these days.  The jacket also has special ducting for headphones from the iPod/Zune pocket, space for books, water bottles, and magazines.  The sleeves come off to make it into a vest.  In short, it looks very cool.  If a coat isn't what you are looking for, they also have shirts, hats, and pants with special pockets for gear.

You can get 20% off if you go through TWIT.  Here is a cool video of the CEO showing this off on Donny Deutsch's show.

Friday, November 9, 2007

Keep Process Simple

Years ago one of our Software Test Engineers was tasked with documenting our smoke* process.  It should have been something simple like:

  1. Developer packages binaries for testing
  2. Developer places smoke request on web page
  3. Tester signs up for smoke on web page
  4. Tester runs appropriate tests
  5. Tester signs off on fix
  6. Developer checks in

Instead it turned into a ten page document.  Needless to say, I took one glance at the document and dismissed it as worthless.  As far as I know, no one ever followed the process as it was described in that document.  We all had a simple process like the six steps I laid out above and we continued to follow it.

When tasked with creating a process for a given task, the tendency is to make the process complex.  It's not always a conscious effort, but when you start taking into account every contingency, the flow chart gets big, the document gets long, and the process becomes complex.  To make matters worse, when a problem happens that the process didn't prevent, another new layer of process is added on top.  Without vigilance, all process becomes large and unwieldy.

The problem with a large, complex process is that it quickly becomes too big to keep in one's mind.  When a process is hard to remember, it isn't followed.  If it takes a flow chart to describe an everyday task, it is probably too big.  It is far better to have a simple but imperfect process than one that is complete.  The simple process will be followed faithfully.  The complete one will at best be simplified by those carrying it out.  Worse, it may be fully ignored.  It is deceptive to think that a complex process will avoid trouble.  More often than not, it will only provide the illusion of serenity.

I have discovered that process is necessary, but it must be simple.  It is best to keep it small enough that it is easily remembered.  The best way to do this is to document what is done, not what you want to be done.  People have usually worked out a good solution.  Document that.  Don't try to cover all the contingencies.  Doing so will only hide the real work flow.  Documented process should exist to bring new people up to speed quickly.  It should be there to unify two disparate systems.  It should not be there to solve problems.  You hired smart people; let them do that.  Expect them to think.

My final advice on this topic is do not task people with creating process.  It is tempting to say to someone, "Go create a process for creating test collateral."  Tasking someone with creating process is a surefire way to choke the system on too much process.  The results won't be pretty.  Nor will they be followed. 


*Smoke testing is a process involving testing fixes before committing them to the mainline build.  Its purpose is to catch bugs before they affect most users.

Thursday, November 8, 2007

The Need for a Real Build Process

Jeff Atwood at Coding Horror has a good post about how "F5 is not a build process."  In it, he explains why you need a real, centralized build process.  F5 (the "build and debug" shortcut key in Visual Studio) on a developer's machine is not a build process.  At Microsoft, we have a practice of regular, daily builds.  We use a centralized source control system into which everyone checks their code.  At least daily, a "build machine" syncs to the latest source code and builds it all.  There are three main advantages to this system. 

First, it makes sure that the code is buildable on a daily basis.  If someone checks in code which causes an error during the build process, it shows up quickly.  We call this "breaking the build" and it's not something you want to be caught doing.  If you break the wrong build, you can get a call early in the morning asking you to come in immediately and fix it.

Second, it ensures that there is always a build ready for testing.  This has the added benefit of providing one central spot for everyone to install from.  If the build process is just on individual developers' machines, it is not uncommon for different people to be testing a build from radically different sources and thus have conflicting views of the product.  If you find a bug and someone says "Oh, it's fixed on my machine with these private bits" that is a sign of trouble unless they fixed it in the last 24 hours.

Finally, it ensures there is a well-understood manner of building the product.  If the builds are not centralized, there is no documented way of building the product.  Being able to build then becomes a matter of having the right tribal knowledge.  Build this project, then copy these files here, then build this directory, then this one.  Having a centralized and well-understood build process is a sign of a mature project.

If you are looking to improve your build process, there are plenty of tools out there to help.  The oldest and probably most-used tool is make.  It's also probably the hardest to use.  It has a lot of power, but it's pretty quirky.  I've heard good things about Ant but I haven't used it.  It seems to be taking the place of make in a lot of projects.  The latest Visual Studio has a new build tool called MSBuild.  Again, I've heard good things but I haven't used it.
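To give a flavor of what make looks like, here is a minimal, hypothetical Makefile (the compiler, flags, and file names are all invented for illustration) that rebuilds a small C program only when its sources change:

```makefile
# Hypothetical layout: main.c and util.c compile and link into 'app'.
CC = gcc
CFLAGS = -Wall -O2

app: main.o util.o
	$(CC) $(CFLAGS) -o app main.o util.o

# Pattern rule: build any .o from the matching .c file.
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f app *.o
```

Running make with no arguments rebuilds only the targets whose prerequisites have changed, which is exactly the kind of incremental, repeatable build a central build machine can run on a schedule.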

Thursday, November 1, 2007

Don't Blame the User for Misusing Your API

A conversation the other day got me thinking about interfaces.  As the author of an interface, it is easy to blame the users for its misuse.  I think that's the wrong place for the blame.  An interface that is used incorrectly is often poorly written.  A well-written one will be intuitive for most people.  On the other hand, a poorly-designed one will be difficult for many people to operate.  Sure, properly educated users could use even a poor interface and even the best interfaces can be misused by incompetent users, but on average a good interface should intuitively guide the users to make the right decisions.  I'll focus mostly on programming interfaces but the same facts are true for user interfaces.

One of the rules in our coding standard is that memory should be released in the same scope in which it was allocated.  In other words, if you allocate the memory, you are responsible for making sure it is released.  This makes functions like this violations of our standard:

Object * GetObject(void)
{
    return new Object(parameters);
}

Why do we say this shouldn't be done?  It is not a bug.  Not necessarily, anyway.  If the person calling this function understands its contract, he'll know that he needs to delete the Object when he's done with it.  If there is a memory leak, it's his fault.  He should have paid closer attention.  That's all true, but in practice it's also really hard to get right.  Which functions return something I need to delete?  Which allocation method did they use?  malloc?  new?  CoTaskMemAlloc?  It's better to avoid the situation by forcing the allocation into the same scope.  This way there is much less room for error.  The right behavior is intuitively derived from the interface itself rather than from some documentation.  Here's a better version of the same function:

void GetObject(Object * obj)
{
    // initialize the caller-allocated object here
}
This second pattern forces the allocation onto the user.  Object could be on the stack or allocated via any method.  The caller will always know because he created the memory.  Not only that, but people are accustomed to freeing the memory they allocate.  The developer calling this function will know that he needs to free the memory because it is ingrained in his psyche to free it.

Here's another example I like.  I've seen this cause bugs in production code.  CComPtr is a smart pointer that wraps COM objects.  It manages the lifetime of the object so you don't have to.  In most cases, if the CComPtr already points at an object and you ask it to assign something new to itself, it will release the initial pointer.  Examples are Attach and operator=.  Both will release the underlying pointer before assigning a new one.  To do otherwise is to leak the initial object.  However, there is an inconsistency in the interface.  operator&, which retrieves the address of the internal pointer, p, does not release the underlying pointer.  Instead, it merely asserts if p!=NULL.  CComPtrBase::CoCreateInstance behaves similarly.  If p!=NULL, it asserts, but happily over-writes the pointer anyway.  Why?  The fact that there is an assert means the author knows it is wrong.  Why not release before over-writing?  I'm sure the author had a good reason, but I can't come up with it.  Asserting is fine, but in retail code it will just silently leak memory.  Oops.  Who is to blame when this happens?  I posit that it is the author of CComPtr.

When someone takes your carefully crafted interface and uses it wrong, such as forgetting to free the memory from the first GetObject call above, the natural inclination as the developer is to dismiss the user as an idiot and forget about the problem.  If they'd just read the documentation, they would have known.  Sometimes it's possible to get away with that.  If your interface is the only one which will accomplish something, or if it is included in some larger whole which is very compelling, people will learn to cope.  However, if the interface had to stand alone, it would quickly be passed over in favor of something more intuitive.  Let's face it, most people don't read all the documentation.  Even when they do, it's impossible to keep it all in their heads. 

Many times the author of the API has the power to wave a magic wand and make whole classes of bugs disappear.  A better-written API--and by that I mean a more intuitive one--makes it obvious what is and is not expected.  If it is obvious, people won't forget.  As the author of an API, it is your responsibility to make your interface not only powerful but also intuitive.  Bugs encountered using your interface make it less appealing to use.  If you want the widest adoption, it is best to make the experience as pleasant as possible.  That means taking a little more time and making it not just powerful, but intuitive.

How does one do that?  It's not easy.  Some simple things can go a long way though. 

  • Use explanatory names for classes, methods, and parameters. 

  • Follow the patterns in the language you are writing for.  Don't do something in a novel way if there is already an accepted way of doing it. 

  • Finally, be consistent within your interface and your framework.  If your smart pointer class releases the pointer most of the time something new is assigned to it, that's not enough.  It should be true every time.  To have an exception to the rule inevitably means people will forget the exceptional situation and get it wrong.

As I said at the beginning, this same rule applies to user interfaces.  If people have a hard time using it, blame the designer, not the user.  For those of you old enough to have used WordPerfect 5.x, recall the pain of using it.  There was no help on the screen.  Everything was done by function key.  F2 was search.  Shift-F6 was center, etc.  The interface was so unnatural that it shipped with a little overlay for your keyboard to help you remember.  Is it any wonder that GUIs like those in Microsoft Word, MacWrite, Final Copy (Amiga), etc. eventually became the dominant interface for word processing?  People could and did become very proficient with the WordPerfect interface, but it was a lot easier to make a mistake than it should have been.  The more intuitive GUI won out not because it was more powerful, but because it was easier.  Think of that when designing your next API or user interface.  Accept the blame when it is used incorrectly and make it so the misuse won't happen.  Keep it easy to use and you'll keep it used.

Wednesday, October 31, 2007

Happy Halloween!

It is once again Halloween which, here in the U.S., means a time when all the kids dress up in costumes and go door-to-door "trick or treating" (which means begging for candy).  I like this holiday.  It's fun to see everyone dressed up at the door.  Unfortunately, around here at least, the tradition seems to be waning.  Malls, businesses, churches, etc. all have their own trick-or-treat times and few kids come to the door.  I can count on my fingers the number of groups we see in my neighborhood.  Sigh.  The upside is that with so few kids, we give out full-sized candy bars.  :)

Microsoft of course is one of those businesses which offer trick-or-treating.  The kids get dressed up and walk office to office.  Most people bring in candy.  It feels a lot like cheating.  When I was a kid we had to work for our candy.  Now, they get a whole bag-full in less than an hour.  Kids these days...

Monday, October 29, 2007

Vinyl Better Than CD?

An amazingly lucid discussion of the benefits of vinyl over CD (or lack thereof) is going on over at Slashdot right now.  So far the trolls are staying away.  If you want some understanding of dynamic range compression, sampling, etc., check it out.

For the record, I'm in the "CD is better" camp.  It handles the frequencies that humans can hear (with perhaps a very small minority left out) and is much more stable.  A 44.1 kHz sampling rate captures frequencies up to 22.05 kHz, already beyond the roughly 20 kHz ceiling of human hearing.  Unless we are mastering CDs for our dogs to listen to, 44.1 kHz is probably sufficient for a consumption medium.  Sure, there are mastering issues on many CDs right now, but that's not the fault of the CD format but rather the process.  A CD without all those issues could easily be produced.

Saturday, October 27, 2007

A Little C= 64 Love

Here's a fun one for the weekend.  A retrospective of the Commodore 64 and its place as a great game machine.  The C= 64 sold something like 17 million units and is, to this day, the single best-selling computer model of all time.  My first computer was a Commodore 128, which was basically an expensive C64 with a lot of worthless hardware in it.  I don't think I ever did much with it other than run C64 programs.  Well, when I programmed in BASIC I did so in C128 mode.  You could renumber lines there. 

Thursday, October 25, 2007

Testing A Daily Build

It is becoming accepted in the industry that teams should produce a build on a daily basis.  Every project at Microsoft does this, as do most projects elsewhere.  If you happen to be on a project that does not, I suggest you work to get one implemented soon.  The benefits are great.  After a daily build is produced though, what next?  What do you do with it?  Here is my suggested work flow.  This is for a large project.  If yours is smaller, feel free to collapse some of the items.

  1. Deal with build breaks - If anything failed to compile, jump on it right away.  Drag developers out of bed and get it fixed.  Either that or back out the offending checkin (you are using a source control system, aren't you?) and build again.
  2. Build Authentication Tests (BATs) - The first tests to run against a build are the BATs.  These are test cases that merely ensure that the build is not entirely broken.  These should ensure that all the right files were produced, that the setup works and places files correctly, etc.  Some very basic functionality may be tested here as well.  If the BATs fail, have someone look into the cause immediately and get it fixed.  Do not proceed with any more work on this build until BATs pass.
  3. Build Verification Tests (BVTs) - BVTs are a set of tests to verify basic functionality in your build.  These are not comprehensive tests.  Rather, they are limited in scope and time to the things that matter most.  I'd recommend you ensure that these complete within an hour.  If these fail, the build must not be released for further testing.  Developers must be called in and fixes generated quickly.  I've talked about these previously.  It is worth repeating a little here though.  BVT failures are not acceptable.  These are the canaries we take with us into the mine.  If they fall over, it's time to head for the exits.  Only put tests into the BVTs that meet the criteria that you're willing to block a build's release when they fail.
  4. Functional Tests - This is most of the rest of your test collateral.  These tests are initiated upon completion of the BVTs.  The functional tests contain all of your automation and any manual tests you deem worthy of being run on a daily basis.  These can take all day to execute if you want.  It is acceptable for functional tests to fail.  Each failure should generate a bug that is tracked.  The point of the functionals is to get a feel for the true state of the product.  Everything you want to work should be tested here.  When most (all?) of your functional tests are passing, you know you have a product ready to release to the world. 

That's it for the daily testing regime.  However, that's not all for testing.  There are other tests you probably want before you are ready to release.  These include:

  • Stress Tests - Make sure your code can work under stressful conditions like repeated use, high CPU usage, and low memory.
  • Longhaul Tests - Ensure that your code will continue to work over long periods of time.  The exact amount of time depends on the usage model of your tests.
  • Customer Acceptance Tests - Make sure the customer is happy with the product.  This is usually a series of manual tests that verify that the usability is acceptable.

Wednesday, October 24, 2007

New MSDN Tester Center

MSDN now has a home for test information.  Check out the new MSDN Tester Center.  It has articles, videos, and a collection of blog posts all revolving around the idea of testing.  If you are a tester or test developer, bookmark this site.  It looks like it will be useful.

Tuesday, October 23, 2007

What Self-Taught Programmers Are (Often) Missing

Some self-taught programmers can hold their own with the best coders out there.  Others, although smart people, are fundamentally weaker at programming.  While there is variation among classically trained coders too, they are on average better than their self-taught peers.  Why is that?  Why can't self-taught programmers become as good in similar numbers?  There are a lot of reasons for this, I'm sure, but here is one that I believe has a large effect:  self-taught programmers often have very little exposure to good programming. 

It is not uncommon for a self-taught programmer to work largely alone or with others with similar training.  Programmers don't tend to read a lot of others' code.  If they do, they read the small snippets they find on Google or Live.  They don't read whole programs.  Admittedly, it is hard to do this.  What gets lost without reading full programs is the way they are laid out.  Reading a sample on the web demonstrates how to call an API, but doesn't show how to organize code into a larger cohesive whole. 

I often compare programming to writing.  Someone who just knows the syntax of a language and tries to program is like someone who knows grammar and tries to write.  Technically, it will be correct, but it might not be much fun to read.  Good writing is more than just proper words in proper sentences.  It is the way the story flows that separates a good book from a poor one.  It is similar in programming.  It is the way the objects and functions interact that separates good programming from poor.

Think about a good writer.  How did she learn to be a good writer?  Practice.  But also exposure to other good writing.  Someone who is a good story teller usually exposes herself to other good stories.  I suspect that most good writers are, or were, avid readers too.

What does this tell us about becoming a better programmer?  Like a good writer, it is important that one is exposed to good examples.  In college, this comes in class.  You are exposed not to large programs--although that happens sometimes--but rather to the ideas that make good programs.  Exposure to algorithms, data structures, OS principles, design patterns, etc. all contributes to building up a repertoire to draw upon and emulate.  Self-taught programmers often don't get this exposure.

This leads to a simple solution.  If you are a self-taught programmer, get exposure to the ideas of others.  This can be done either in the classical manner by reading books on databases, compilers, operating systems, patterns, etc. or by reading the code of programs that implement these ideas.  If you work in software development, read the modules written by the senior team members.  Don't read for syntax, but try to understand how the big pieces work together.  If you don't, go look at sizeable open source projects.  Become familiar with how the pieces fit together.

Monday, October 22, 2007

More Amiga History From Ars Technica

Ars just released another edition of its history of the Amiga series.  The first deals with the purchase of the Amiga by Commodore.  I'll be updating this post as new articles in this edition are posted.

Part 4 - Enter Commodore

Part 5 - Postlaunch Blues

Helping Groups Succeed

or What to do when you aren't in control but neither is the leader.

A while back I wrote about providing clarity as a leader.  As part of that essay I mentioned some techniques for keeping groups on track.  Those are well and good if you are the leader, but what if you aren't?  What if the leader of your group didn't read my post and is making a mess of things?  It's common for someone to be in a position of leadership but not be leading.  This usually results in meetings that are contentious, long, and don't bear fruit.  If they do produce anything, it comes at a tremendous price.  What should you do if you are caught in such a meeting?  Below are some techniques that will help.

First, it is important to get a good feel for what is causing the failure.  If the cause of failure can be understood, then the solution can be derived from there.  Many meetings fail because there is no shared vision.  There are two very important items that must be shared by all participants for a meeting to be successful.  First, there must be a shared vision of what the outcome should be.  What is the specific decision the group is trying to make?  What is the goal of the design?  How detailed does the design need to be?  Second, there must be a shared vision of the rules.  It must be understood how you are going to make the decision.  Without this shared vision, there will be a lot of time spent talking past each other, driving toward different agendas, following rabbit trails, etc.

Given that situation, what are the things a person can do to help the meeting succeed?  There are two primary tactics that I've seen work.  First, help form consensus.  Second, help bring things back on track.

Forming consensus involves several actions.  It means listening carefully to what is being said.  If two or more people are coming from the same or similar places, point this out and try to get other members of the group to agree.  A poor leader will let these similar voices get lost in the noise.  Stepping back from one's own agenda to try to point out and support the development of consensus is important.  It can help to move the meeting forward.  If there is no consensus forming, try stepping back from the immediate decision.  Is there a more fundamental decision that, when decided, could help constrain the current decision to more tractable territory?  If so, lead the group to that other decision.

Once consensus is formed, it is common for non-germane conversations to take place.  An interesting but irrelevant topic may be discussed.  Someone may bring up a new point on a decision already made.  When this happens, it is important that someone bring the group back on track.  Point out that the conversation is straying and then bring up a point that is on topic.  Do so in a friendly manner.  You don't want to be seen as bossy. 

There is one other tactic which can work but doesn't always.  That is, grab the power.  Whoever is controlling the pen or the keyboard has a lot of power.  Offer to take the notes or write on the white board.  This gives you the opportunity to have influence on what gets written.  If you see consensus starting to form, just write it down.  If there is something tangential, don't.

Saturday, October 20, 2007

Another Project BBQ is in the can

It's October and that means it is once again time for Project Bar-B-Q.  This is the premier computer music/audio think-tank event.  It's a gathering of 50 people from all across the spectrum.  There are those who make audio hardware, operating systems, audio for games, MIDI controllers, etc.  Again this year I had the opportunity to attend.  It's been very enjoyable and quite productive.  I get the opportunity to meet many new people and learn new things.  It's a great chance to mingle with those who use our technology and see what they need us to change for them.  I also get to rub elbows with a lot of people I wouldn't otherwise meet and learn about new technologies and ideas.  I always come home recharged and ready to work hard to change the world.  This year is no exception.

Friday, October 12, 2007

Interviewing the Experienced

This week there was an interesting conversation over on Slashdot.  The subject of the post is an age discrimination suit against Google.  However, the discussion has gone to other interesting places.  The question being asked is whether there is a difference in the way you should interview experienced people vs. those just out of school.  It reflects something I've come to understand after years of interviewing.  The software/IT industry is relatively young, both in terms of the workforce and the maturity of the work ecosystem.  As a result, we don't tend to have very sophisticated interview processes.  To be sure, they are good, but they are not flexible enough.  They are aimed at hiring the young hotshot recent graduate.  They are not usually designed to find someone with experience.  That will have to change as the median experience in the industry rises.

What is it that biases interviews toward those recently in school?  The questions we ask favor those who have graduated in the near past over those who have been in the industry for a decade or two.  It is pointless to ask a newly minted graduate about his/her experience.  They will likely have a few group projects and perhaps a summer internship to talk about, but the projects are all limited in scope.  Even someone who has been in the workforce for a few years will likely have been implementing someone else's design.  Assessing non-trivial design skills is hard and still harder in an hour-long interview.  So often we don't evaluate this.  Instead, we turn to the easily measurable.

We, as an industry, tend to ask a lot of questions that are more about particular knowledge than about problem solving skills.  Sure, we couch them in terms of problem solving skills, but they are really about particular knowledge.  Asking someone to describe OS internals (processes, memory, garbage collection, etc.) is biased toward someone who has recently taken an OS class or the few who work on the guts of operating systems.  It's not that most experienced people can't understand them, it is that they don't need to.  Many coding questions are about syntax or simple tasks no one faces in real life.  Asking someone to swap the words in a string without using extra memory isn't about problem solving as much as it is about knowing the trick.  Expecting someone to be able to characterize the running time of some algorithms is a test for how long it has been since they last took an Algorithms class.  I've even seen people take the simple question of "sort this list" and turn it into "can you implement quicksort in your head?"  Why would anyone need to?

As developers grow in experience, their work changes.  While the specifics of an implementation are still important, the harder work is about the interaction of objects.  Design becomes more important than syntax.  Strategy more important than tactics.  This has an impact on their specific knowledge.  Everyone knows that most of what we learn for a CS degree is of little value in the industry.  Why then should it be surprising that those whose graduation date has a different 3rd digit than that of the current year don't have fast recollection of such facts?  There is only so much information a mind can hold at a time.  It is thus more important to know where to find an answer than to be able to recall the answer off the top of one's head.

If the test of tactical information is not what we need on bigger projects and is not what experienced programmers are best at, how do we change the interview process to dig out the strategic skills we should be looking for?  Unfortunately, I don't have any great answers.  I'm hoping some of you can help here.  My best advice is to think about the questions you are asking.  Ask yourself if the skill required to solve it is something an experienced programmer would use every day.  If not, don't ask the question.  Instead, look for questions that are more central to work people actually do.  Also, make sure to expand the repertoire of questions to include design questions.  These take more time to think up than "code a FIFO queue" but are a better judge of the utility someone will bring to the team.  Don't be afraid to ask softer questions about a person's experience.  What were the difficult problems the candidate ran into on their projects?  How did they handle them?  These questions will elicit low-value responses from a recent grad but a cornucopia of information from someone who has been around the block a few times.

Thursday, October 4, 2007

Understanding MP3 Compression

Another great article from Ars Technica.  This time about MP3 compression.  If you've ever wondered how MP3 works, this is a great article to start with.  No math is necessary.

Thursday, September 27, 2007

Digital Audio Primer

Ars Technica has a new primer up describing digital audio.  A good read if you want to introduce yourself to the concepts.

Wednesday, September 26, 2007

Saying Goodbye to Illinois

I'm about to head home from my trip to the University of Illinois, Urbana-Champaign (UIUC).  I've enjoyed my visit.  I was able to interview 23 bright students.  The quality was very high.  The school should be proud.  I also had a chance to see the places and some of the people I've heretofore only seen on the grainy end of a web broadcast.  It was fun to see the rooms where my classes are held and to meet some of the professors whose classes I've taken.  During my stay I had a chance to try some of the local cuisine.  Timpone's is a fancy Italian restaurant with a great atmosphere and good food.  I also tried Papa Del's pizza.  The deep dish is amazing.  Bring a big stomach though.  Finally, I grabbed lunch one day at Zorba's.  They specialize in gyros and have a fun local atmosphere.  Lots of college students seem to eat there and the walls are covered with newspaper clippings of the UIUC sports teams.

Tuesday, September 25, 2007

Enjoying Pandora


I've really been enjoying listening to Pandora lately.  It is a net radio service that builds a "station" for you based on your tastes.  You begin by entering a song or an artist you like.  It then plays music it thinks is similar.  You can give each selection a thumbs up or thumbs down, and based on your input, it refines the programming.  This is a great way to discover new music.  Today I started with Evanescence and discovered Leaves' Eyes and Nightwish.  Now if someone would just make a full-featured Pandora module for Media Center...

Monday, September 24, 2007

What Is A Microphone Array?

One of our program managers, Richard Fricks, just had a piece posted on the Windows Vista blog talking about microphone arrays.  He describes what microphone arrays are, what they are good for, and how Windows Vista enables support for them.  If you use the built-in microphone, you should understand this feature.  It has the potential to provide a dramatic boost in the sound quality.  Expect to see more and more mic arrays appearing, especially in laptops.

A Little Design Advice

A recent article on InfoWorld lays out "The eight secrets that make Apple No. 1."  There are many things in the article that I disagree with, but two stick out as good advice for software design.

The first "secret" is that engineering supports design and not the other way around.  Traditionally the software industry has done engineering first and design second.  This might be because CS schools emphasize code and don't give you a better grade for prettier UI.  It could be because programming originated in a command-line environment where a friendly UI wasn't really possible.  It could just be because the typical programmer has a bad sense of style.  Whatever the reason, it's true that more software is written engineering first than design first.  Try using software written 10+ years ago and you'll see an extreme example of why it shouldn't be.  In today's world where things like WPF and AJAX give us so much flexibility in UI, there is no excuse to constrain the user experience based on what is easy to engineer.  Part of the reason the original Mac turned out as well as it did was because Steve Jobs was such a fanatical advocate for it to look good.  He was ahead of his time. 

This can be taken too far.  The Macintoshes of the mid-to-late 1990s looked nice, but the engineering beneath them was failing.  They were slow and flaky.  They didn't even have pre-emptive multitasking yet; the Amiga had it in 1985.  A solid engineering base is definitely required, but the part that interacts with the user needs to be designed from the user backward.

The second "secret" I want to mention is what the author calls, "You can't please everyone, so please people with good taste."  I disagree with the recommendation to just target the high end.  Walmart made a lot of money following the opposite approach.  The key takeaway here though is to target a customer segment.  Don't target the whole market.  Trying to satisfy everyone is a great way to satisfy no one.  You'll get higher customer satisfaction if you solve a few people's needs 100% than if you solve everyone's needs 80%.  The siren song of targeting everyone is always high.  There's more money in the whole market than in a niche.  However, trying to get that money is easier if you target a niche, satisfy it, then move on to the next niche.  It also means that your customers will be more sticky.  If your product is an 80% solution, that means there is 20% that people don't like.  That's 20% that if someone solved, customers would just to the competing product.  If a customer has a 100% solution, she will be very loyal and unlikely to jump to a different supplier.

Friday, September 21, 2007

Metrics of Software Quality

This post over on TestingReflections brings up an interesting point.  Michael answers the question, "What are the useful metrics for software quality?" with another question.  He asks, in a roundabout fashion, what is it that we value about the software?  He rightly points out that some of the things we normally associate with software quality may not be what we're looking for in some circumstances.  He suggests these metrics as possibilities:

  • Is it testable?
  • Is it supportable?
  • Is it maintainable?
  • Is it portable?
  • Is it localizable?

These are all great questions that will affect many software projects.  A few more that come to my mind that often matter are:

  • Does it meet the specification?
  • Is it extensible?
  • Is it documented (both the code and for the customer)?
  • Is it deterministic (does it behave the same each time)?

Michael points out that none of his metrics--and really none of mine--are quantifiable.  They are qualitative metrics.  The bind testers often find themselves in is that they are asked by management to produce numbers (pass rates, code coverage, complexity values, bug trends, etc.) to describe quality.  While those are useful pieces of information, they do not accurately describe the state of the program nor of the code.

Thursday, September 20, 2007

Visiting UIUC

If you follow this blog, you'll know that I'm currently working on my Masters in Computer Science through the University of Illinois, Urbana-Champaign.  I really like the program I'm in.  Most classes are real.  There are real people meeting on campus 3 times a week in a classroom, and those of us online get to join in via a camera.  It's strange, though, taking classes at a school you've never been to.  You see the whiteboards in plenty of rooms you've never set foot in.  Next week I'll have the opportunity to change that.  I'm going there as part of a recruiting trip to UIUC and will get to spend a few days on campus.  I look forward to getting a better feel for where things are located and just experiencing the campus and surrounding area.  I might also try to look up a professor or two that I've taken classes from.  Should be fun.

Wednesday, September 19, 2007

Do We Still Need Test Developers?

In my post, Test Developers Shouldn't Execute Tests, Antony Marcano asked if we actually need test developers or if developers would do.  If the more traditional testing tasks are being done by one group and the automation by another, does it even make sense to have a test development role any more?  The answer to this question is interesting enough that it warrants its own post.  The answer I've come to is yes.

The skills employed by test developers are very similar to those utilized by developers.  In fact, I've often argued that test developers are merely developers in the test organization.  I stand by that description.  There are some differences that we'll see below, but it is largely true.  The vast majority of the time a test developer is working his craft, he'll be writing code which will look very similar to the code written by developers.  If the product is a framework or platform, the code a test developer writes is the same code written by all developers utilizing the framework.  The constraints on the code are different.  The level of polish is usually not as high.  The execution path for test code is very constrained, so the code need not be as robust outside that path.  High quality is, however, still very important.  Tests cannot be flaky, and they often have long lifespans, so correct behavior and maintainability matter.  These are the same sorts of things developers are required to produce in their code.

So why do we need this class of people?  The skillset, while similar, is also different.  Also, it is important to have a role that is set aside just for this task.

Test developers must think differently than developers.  Whereas a developer must worry about the positive cases, a test developer focuses her efforts on the negative cases.  She is more interested in the corner cases and "unexpected" input than in the primary method of operation.  This is even more true today with the increasing use of unit tests.  Test developers also must focus on end-to-end and integration testing.  This is often the hardest to automate and requires a different approach than typical unit tests take.  Developers could write this code, but are not accustomed to doing so.  Having a person who does this all the time will produce better-refined solutions.  In short, test developers will have a greater sense of where to look for bugs than someone who does not spend most of their day trying to break things.

There are particular methodologies specific to test development.  Developers will be unfamiliar with these.  I'm thinking of model-based testing, data-driven testing, equivalency classes, pairwise testing, etc.  Being adept at each of these is a skill that requires development.  A person dedicated to test development will learn them.  Someone recruited for only a short time to write some tests will not.
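Data-driven testing and equivalence classes, for example, can be sketched in a few lines.  The parse_port function below is invented purely for illustration; the pattern, one representative input per equivalence class, driven from a table, is the point:

```python
# A minimal sketch of data-driven testing built on equivalence classes.
# parse_port is a made-up stand-in function; the table-driven pattern
# is what matters, not the function itself.

def parse_port(text):
    """Parse a TCP port number, raising ValueError on bad input."""
    value = int(text)          # raises ValueError for non-numeric text
    if not 1 <= value <= 65535:
        raise ValueError("port out of range: %d" % value)
    return value

# One representative per equivalence class instead of thousands of
# near-identical cases.  Each row: (input, expected result or error).
CASES = [
    ("80",    80),           # typical valid value
    ("1",     1),            # lower boundary
    ("65535", 65535),        # upper boundary
    ("0",     ValueError),   # just below the valid range
    ("65536", ValueError),   # just above the valid range
    ("-1",    ValueError),   # negative
    ("http",  ValueError),   # non-numeric
    ("",      ValueError),   # empty string
]

def run_cases():
    """Run every row in the table; return the rows that failed."""
    failures = []
    for text, expected in CASES:
        try:
            actual = parse_port(text)
        except ValueError:
            actual = ValueError
        if actual != expected:
            failures.append((text, expected, actual))
    return failures
```

Note that the boundary values (1, 65535, 0, 65536) each get their own row; bugs cluster at the edges of equivalence classes, so those representatives earn their keep.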

Additionally, there is a benefit to having a dedicated test development corps beyond just the specific skills.  If everyone is thrown into a common bucket, testing tends to get underfunded.  The money is not made directly by testing.  In fact, you can usually skimp on testing for a while and get away with it.  Over time, though, a product will quickly degrade if it is not actively tested.  Having a sandboxed team responsible for testing ensures that proper time is spent testing.  No one can steal cycles from the test development team to implement one more feature.

It's also important to have a group that is not invested in the product the same way that dev is.  A good tester will be proud to hold up the product if it deserves it.  Developers will, subconsciously, be rooting for the product to succeed and may overlook things that are wrong.  They'll also approach the product with assumptions that someone uninvolved in its creation won't share.  What is obvious to everyone who wrote the product may not be obvious to anyone else.

Monday, September 17, 2007

Apportion Blame Correctly

This is a follow-on to my post about Managing Mistakes.  There is a particular kind of blame I've seen several times and which I feel is especially unwise.  The situation is this.  A person, we'll call him Fred, is working with another team to get them to do something.  He needs to get a feature into their product.  This feature is important to his team.  Fred does all the right things, but the other team drops the ball.  They don't respond.  They slip their deadlines.  They don't include the feature.  The trouble is, while Fred did all the right things, he could have done even more.  He could have called one more time.  He could have camped out in their offices.  He could have paid closer attention to the other team's schedule.  In short, he could have made up for their failure through extra work on his part.  Usually, however, these extra steps are only obvious in hindsight which is, after all, 20/20.

The all-too-typical response of management is to blame Fred for the failure.  After all, he could have avoided the problem if he had just done a little more work.  Management is frustrated and embarrassed that the feature didn't get completed as planned.  Fred is on their team.  Fred is thus the easy target.  Fred does deserve some of the blame here.  It's true that he could have done more.  But Fred wasn't delinquent in his actions.  He followed the usual steps.  The other team had agreed to do the work.  If they had done their part, everything would have worked out.  Unless Fred's manager is better than average, he'll receive all of the punishment even though he's only partly at fault.

There is a term in the legal field for this sort of blame.  Joint and Several Liability is when two parties sharing disproportionate amounts of the blame are held equally liable for the full cost.  Wikipedia gives this example:  If Ann is struck by a car driven by Bob, who was served in Charlotte's bar, then both Bob and Charlotte may be held jointly liable for Ann's injuries. The jury determines Ann should be awarded $10 million and that Bob was 90% at fault and Charlotte 10% at fault.  Under joint and several liability, Ann may recover the full damages from either of the defendants. If Ann sued Charlotte alone, Charlotte would have to pay the full $10M despite only being at fault for $1M. 

This strikes most people as unfair.  Charlotte is only responsible for 10% of the damage and should thus pay only 10% of the penalty.  Applying an analogous standard in the workplace is equally unfair.  If Fred is only 10% responsible for the failure, he shouldn't receive all of the penalty associated with it.  He shouldn't be reprimanded as if it is solely his fault.  Unfortunately, Fred is the most proximate target and thus often receives all of the blame.

Instead, this is a good time to apply the Tom Watson Sr. school of management.  Don't blame Fred for the whole problem.  Instead, use it as a learning experience to teach him what he could do better next time.  Doing so creates a grateful employee instead of a discouraged one.  Fred is more likely to stick his neck out again if it isn't cut off the first time.  He's also smart.  He won't let the same mistake happen a second time.

Friday, September 14, 2007

Test Developers Shouldn't Execute Tests

This view puts me outside the mainstream of the testing community but I feel strongly that test teams need to be divided into those that write the tests (test developers) and those that execute them (testers).  Let me be clear up front.  I don't mean that test developers should *never* execute their tests.  Rather, I mean that they should not be responsible for executing them on a regular basis.  It is more efficient for a test team to create a wall between these two responsibilities than to try to merge them together.  There are several reasons why creating two roles is better than one role. 

If you require that test developers spend more than, say, 15% of their time installing machines, executing tests, etc., you'll find a whole class of people that don't want to be test developers.  Right or wrong, there is a perception that development is "cooler" than test development.  Too much non-programming work feeds this perception and increases the likelihood that people will run off to development.  This becomes a problem because in many realms, we've already solved the easy test development problems.  If your developers are writing unit tests, even more of the easy work is already done.  What remains is automating the hard things.  How do you ensure that the right audio was played?  How do you know the right alpha blending took place?  How do you simulate the interaction with hardware or networked devices?  Are there better ways to measure performance and pinpoint what is causing the issues?  These and many more are the problems facing test development today.  It takes a lot of intellectual horsepower and training to solve these problems.  If you can't hire and retain the best talent, you'll be unable to accomplish your goals.
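To make the alpha-blending question concrete: one common technique is a fuzzy comparison of the rendered output against a reference image, since blend math can legitimately differ by a bit of rounding from one machine to the next.  The pixel buffers and tolerance values below are invented for illustration:

```python
# Fuzzy image comparison: "did the right alpha blending happen?"
# Plain lists of (R, G, B) tuples stand in for real frame buffers
# here; the tolerance and sample data are made up for the sketch.

def images_match(actual, expected, tolerance=3, max_bad_pixels=0):
    """Compare two same-sized pixel buffers, allowing small per-channel
    differences (rounding in the blend math) up to a tolerance."""
    bad = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(actual, expected):
        if (abs(r1 - r2) > tolerance or
                abs(g1 - g2) > tolerance or
                abs(b1 - b2) > tolerance):
            bad += 1
    return bad <= max_bad_pixels

reference = [(255, 0, 0), (128, 128, 128), (0, 0, 255)]
rendered  = [(254, 1, 0), (129, 127, 128), (0, 0, 255)]  # off-by-one rounding
print(images_match(rendered, reference))  # -> True
```

Choosing the tolerance is itself a judgment call: too tight and the test is flaky across video cards, too loose and real blending bugs slip through.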

Even if you could get the right people on the team and keep them happy, it would be a bad idea to conflate the two roles.  Writing code requires long periods of uninterrupted concentration.  It takes a long while to load everything into one's head, build a mental map of the problem space, and envision a solution.  Once all of that is done, it takes time to get it all entered into the compiler.  If there are interruptions in the middle to reproduce a bug, run a smoke test, or execute tests, the process is reset.  The mind has to page out the mental map.  When a programmer is interrupted for 10 minutes, he doesn't lose 10 minutes of work.  He loses an hour of productivity (numbers may vary, but you get the picture).  The more uninterrupted time a programmer has, the more he will be able to accomplish.  Executing tests is usually an interrupt-driven activity and so is intrinsically at odds with the requirements of getting programming done.

Next is the problem of time.  Most items can be tested manually in much less time than they take to test automatically.  If someone is not sandboxed to write automation, there is pressure to get the testing done quickly and thus run the tests by hand.  The more time spent executing tests, the less time spent writing automation which leads to the requirement that more time is spent running manual tests which leads to less time...  It's a tailspin which can be hard to get out of.
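The arithmetic behind that pressure is easy to sketch.  Every number below is made up, but the break-even structure is the same for any real test:

```python
# Back-of-the-envelope break-even point for automating one test.
# All of these numbers are invented for illustration.
manual_minutes = 15      # running the test by hand, once
automated_minutes = 1    # kicking off and triaging the automated run
automation_cost = 480    # one-time cost to write the automation (a day)

# After n runs:  manual total = 15n,  automated total = 480 + 1n.
# Break-even where the two lines cross: 15n = 480 + n.
break_even_runs = automation_cost / (manual_minutes - automated_minutes)
print(break_even_runs)  # about 34 runs
```

Under deadline pressure the 15-minute manual run wins every individual day, which is exactly how the tailspin starts; the automation only pays for itself if someone is sandboxed long enough to get past the break-even point.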

Finally, we can talk about specialization.  The more specialized people become, the more efficient they are at their tasks.  Asking someone to do many disparate things inevitably means that they are less efficient at any one of them.  Jack of all trades.  Master of none.  This is the world we create when we ask the same individuals to both write tests and execute them repeatedly.  They are not granted the time to become proficient programmers nor do they spend the time to become efficient testers.

The objections to this system are usually two-fold.  First, this system creates ivory towers for test developers.  No.  It creates a different job description.  Test developers are not "too good" to run tests; it is just more efficient for the organization if they do not.  When circumstances demand it, they should still be called upon for short periods of time to stop coding and get their hands dirty.  Second, this system allows test developers to become disconnected from the state of the product.  That is a valid concern.  The mitigation is to make sure that they still have a stake in what is going on.  Have them triage the automated test failures.  Ensure that there are regular meetings and communications between the test developers and the testers.  Encourage the testers to drive test development and not the other way around.

I know there are a lot of test managers in the world who disagree with what I'm advocating here.  I'd love to hear what you have to say.

Thursday, September 13, 2007

Managing Mistakes

A promising young executive at IBM was involved in a risky venture that lost $10 million for the company.  When Tom Watson Sr., the founder and CEO of IBM, called the executive to his office, the executive tendered his resignation.  Watson is reported to have said, "You can't be serious.  We've just spent $10 million educating you!"

If you've ever watched The Apprentice starring Donald Trump, you will have seen a different approach to handling failure.  Every season goes something like this:  The best people step up to lead.  They do well for a while but eventually make a mistake.  Trump berates them for that mistake and they are fired.  By the end, only the weakest players are left because they stayed in the background and didn't make themselves a target.

Mistakes inevitably happen.  As a manager, when they do, you must choose how to react.  You can choose to act like Tom Watson or like Donald Trump.  In my experience, I have seen managers make both choices.  I have seen both sorts of managers be successful.  However, those who emulate Trump do not usually have happy organizations.  Watson's response engenders love; Trump's, fear.  Both are good motivators, but ruling by fear has serious consequences for the organization.  If you want the most from your team, you want them to be motivated by love.  People will simply go further for love than they will for fear.  They'll also stick with you longer.  If you rule by fear, people will only follow as long as they think you can offer them something.

Here is a real world example.  As a test manager, there have been times when we've missed things.  Late in the product cycle, or worse, after we shipped, a significant bug is found.  The scrutiny then begins.  My manager will usually start asking pointed questions.  At this point there are two ways to react.  The first is to get upset.  "How could we have been so stupid that we missed this thing?"  "Only incompetent people wouldn't have done the obvious things which would have led to this being found."  The pressure to take this approach is high.  There is emotion involved.  Something went wrong and our most base instincts say that we must demand payment.  There is also the desire to deflect the blame.  "It's not my fault.  Sam screwed up."  Jumping on Sam about this will scare him into thinking twice before making a mistake again.  There are problems though.  The fear of being blamed for failure will distort team behavior.  People will be more careful, but they'll also be less honest.  They'll deflect the blame.  They'll hide mistakes.  It's possible that the real cause will remain hidden.  If it does, the same mistake will repeat.

The second reaction is the one I strive to take.  My rule of thumb is that if something goes wrong, I don't ask who messed up.  I ask my team to tell me not what went wrong, but rather how we'll avoid this mistake next time.  I don't get mad unless they make the same mistake twice.  Then they aren't learning.  As long as they are learning from their mistakes, I am not upset.  This causes the team to react much differently when things go wrong.  They will be supportive of the efforts to find out what happened.  They will be more open about what led up to the mistake.  They'll also work hard next time not to make mistakes.  Not because they fear being chewed out, but because I've been supportive.  They'll want to do well because they respect me and doing well is what I expect of them.

Wednesday, September 12, 2007

Practice, Practice, Practice Makes Perfect

I was sent a link to this article as a followup to my post about learning to program over a long period of time.  The article isn't about programming but rather about comedian Jerry Seinfeld.  When he was young and working to become a comic, he had a particular technique for learning that applies to the act of learning to program.  Jerry knew that to be a good comedian, he had to practice a lot.  The more he practiced writing comedy, the better he would become.  Similarly, the more you practice programming, the better programmer you'll become.

Jerry employed a technique to remind himself to practice.  He kept a calendar, and every day he practiced, he would mark that day off the calendar.  After a while he would have a long streak with no breaks.  Then the incentive to not slack off but instead continue the streak was high.

This technique can be employed in programming.  While you don't necessarily need to program every day (although that wouldn't hurt), you do need to practice regularly.  Pick a program you want to work on.  Now break that up into small steps, each taking a few hours or perhaps a day to accomplish.  After you have the list, it is just a matter of ensuring that every day (or two, or some other regular basis), you accomplish one.  Don't just program when you feel like it.  Force yourself to do it regularly.  If you wait until you get the urge, you may have long gaps and your growth will be slow.
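The calendar trick is simple enough to sketch in code; the dates below are invented examples:

```python
# A minimal sketch of the "don't break the chain" calendar, applied
# to practice days.  The practice dates are invented examples.
from datetime import date, timedelta

def current_streak(practice_days, today):
    """Count consecutive practiced days ending at `today`."""
    days = set(practice_days)
    streak = 0
    day = today
    while day in days:
        streak += 1
        day -= timedelta(days=1)
    return streak

practiced = [date(2007, 9, 9), date(2007, 9, 10),
             date(2007, 9, 11), date(2007, 9, 12)]
print(current_streak(practiced, date(2007, 9, 12)))  # -> 4
```

Skip a single day and the count resets to zero, which is the whole motivational point of the technique.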

Tuesday, September 4, 2007

You Can't Learn To Program In A Hurry

A friend turned me on to this essay from Peter Norvig entitled Teach Yourself Programming in Ten Years.  In it the author attacks the idea of the "Teach Yourself C++ in 21 Days" kind of books.  They make it look easy to learn to program.  Unfortunately, it isn't.  You can't become a good programmer in a month.  In fact, you can't become a good programmer in a year.  You don't need to go to college to learn how to program but it helps.  Learning on your own is a lot of work.  Learning to program is like learning anything of value.  It takes a lot of practice.  The essay quotes research indicating it takes a decade to master any significant skill. 

Norvig gives some good advice for those who want to become programmers.  Most of it, like just getting out there and programming, is great advice but is also common.  A few points, though, are less commonly stated and deserve repeating.  Among these are:

  • Work on other people's programs.  Learn a program written by someone else.  This will teach you how other people have solved problems and will teach you how to write maintainable software.

  • Learn a half dozen languages.  Learn languages that have different approaches to programming.  Each new sort of language will open your horizons and help you see problems in different ways.  Solving a problem in C will lead you in different directions than solving the same problem in Smalltalk.  Unless you know both languages, you'll likely fail to see one of the solutions.

  • Understand what's going on below your language.  Modern programming languages are so high level that it's easy to forget that there is a real machine running the code.  However, as Joel Spolsky says, all abstractions leak.  That is, there will always come a time when you have to understand the layer below the abstraction to solve a bug.  If you understand how memory works, how drives work, what makes a processor stall, etc. you'll be better off.  I see this often in those we interview.  They understand a programming language but they don't understand the operating system.  If you don't know the difference between a thread and a process or if you cannot describe how virtual memory works, you'll be at a loss when things go wrong.
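The point about different languages leading to different solutions can be illustrated even within a single language.  Below, the same small problem (sum the squares of the even numbers in a list) is solved C-style and then in a functional style; if you only knew one paradigm, you'd likely never see the other solution:

```python
# The same problem solved two ways.  A C programmer reaches for an
# explicit loop; a Smalltalk or Lisp programmer thinks in terms of
# filtering and mapping a collection.  Python supports both styles.
numbers = [1, 2, 3, 4, 5, 6]

# Imperative, C-style: walk the list and accumulate by hand.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Functional style: describe the result rather than the steps.
total_functional = sum(n * n for n in numbers if n % 2 == 0)

print(total, total_functional)  # -> 56 56
```

Both produce the same answer, but they embody different ways of thinking about the problem, which is exactly what learning a half dozen languages buys you.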

There is advice here for many people.  If you are learning to program, the application is obvious.  Not so obvious is that if it is going to take you a long time and a lot of practice, you're going to have to put in work on your own time.  You won't learn to be a good programmer without spending your evenings/weekends doing so.  However, there are other things we should take away from this.  If you are a manager and you are trying to grow young or new programmers, you need to give them explicit time and opportunity to program.  No one can just take some classes or read some books and become a programmer.  For those who are experienced programmers, it is a good reminder to read the code of others.  If you work at a big company, you can look at the code of those around you.  Seek out those who are more skilled than yourself and examine how they solve problems.  If you are at a smaller company, seek out open source projects.  See how they do what they do.

Here's a piece of advice that the essay doesn't mention:  rewrite your programs.  Each time you'll have a better understanding of the problem domain and thus you should be able to solve the problem in a more efficient way.  You'll learn how much you've improved when you see your old code and you'll learn to approach the problem in a new way.

Read the whole essay.