Sunday, March 25, 2007

Ten Things You Should Know As A Developer

Andres Taylor gives us his top ten things he's learned about software development.  The list is insightful and definitely worth reading.  Here are my two favorites and my interpretation of them:



  • It all comes down to working software - People pay you for what software does, not how it does it.  No one is paying you to be clever.  They're paying you to get some job accomplished.  Thus, only be clever when you need to be.  This hearkens back to previous discussions on the idea of tradeoffs in software.

  • Your colleagues are your best teachers - In my words, it is better to be a little fish in a big pond than a big fish in a little pond.  One of the things I really enjoy about working at Microsoft is all the smart people I get to work with.  There's always someone who knows more than me on any subject.  We have a culture where people are willing to teach too so there's always something to learn.

Saturday, March 24, 2007

Showstopper!

I just finished reading Showstopper! by G. Pascal Zachary.  It recounts the creation of Windows NT starting with the hiring of Dave Cutler in October 1988 and ending with the shipping of the first version of NT on July 26, 1993.  The book puts a lot in perspective.  NT took nearly 5 years of grueling work.  The book spends a lot of time talking about the impact work on NT had on the personal lives of the team members.  Many didn't see their families much at all for extended periods of time.  It wasn't uncommon for people to pull repeated all-nighters.  We seem to have learned something from this in the past decade.


The book also calls out the contribution of the testing teams.  This is rare in this sort of book.  I've read about the creation of the Mac, the IMP, the XBox, etc. and almost never is testing mentioned.  It's good to read a book which recounts not only the work done by developers but also the heroic efforts of the testers.


If you have an interest in computing history or in the development of large systems, this book is a good one to pick up.  It puts you in the middle of the creation of the OS that runs on so many computers across the world.


I also ran across this interesting paragraph talking about the app-compat work:



The conflict stemmed from the differing priorities of the two sides.  Intent on refining their general model, programmers didn't want to distract themselves by fixing bugs.  Meanwhile, testers wanted to test.  This was a pointless activity when they saw the same bugs week after week. (p. 257)


That sounds a lot like what I was mentioning in my post about single-focus roles.  Each side is so focused on what it is tasked with doing that it doesn't take into account the needs of the other side.

Thursday, March 22, 2007

Some Good Advice for Managers

Some good advice for managers courtesy of the guys at Business Intelligence Lowdown.  They give 73 tips.  Not all of them are eye-opening but I'm sure there are some items here you haven't considered.  A few of my favorites:



  • Do not rake employees over the coals for mistakes that happen inadvertently. Instead, help them understand the error so they don't repeat it.

  • Indifference is as bad as, if not worse than, too much interference. Take the right amount of interest in what your employees do.

  • Keep your employees within the loop. Inform them of all decisions that will affect and be affected by their work. Don't treat them as mindless machines that are used only to get the job done.

  • It's hard to swallow pride and admit that you're wrong, especially to your subordinates. Doing so will not only make them admire you more, but also make it easier for them to admit their own mistakes.

And a few they didn't include:



  • Vacations are sacrosanct:  When an employee is on vacation, don't expect them to be doing work.  Encourage them to avoid checking e-mail.  You can survive without them.  I promise.

  • Have your employees' backs:  Shield them from upper management.  Take the heat for them when they make mistakes.  If your employees believe you're on their side, they'll be more willing to do what you want.  If they think you are in the pocket of upper management, they'll think twice before taking risks for you.

  • Listen:  You're hiring people smarter than you.  Listen to them.  They might be right.

Wednesday, March 21, 2007

Beware of Single Focus Roles

I recently attended a talk advocating combining the management of test and development.  Some of the reasoning for this was to force the quality decisions to fall onto one person.  This makes a lot of sense.  It is important to understand why.  Many times in large software development projects we compartmentalize the various roles.  This is true not just of test and development but also of roles like performance, security, customer service, etc.  This compartmentalization has a direct effect on how people operate.  When someone is responsible for only one aspect of a product, they will often make the right choices for their aspect but the wrong ones for the product overall.

In the traditional software model, test and development are two distinct silos.  The disciplines report to different people and perform different jobs.  This creates tension between the roles.  This tension arises not just from the difference in roles but also from the difference in purpose.  Development wants to add features and test wants to constrain them.  To ship a quality product, you need to strike a balance.  Too many features and the quality will be too low.  People won't tolerate it.  Too much quality and there won't be enough features to attract customers.  Imagine a field.  On one side is quality and on the other, features.  Now imagine a line drawn between them.  To increase quality, you must decrease features.  Each decision is a tradeoff.

Security is also an area that is fraught with tradeoffs.  The Windows of old shows what can happen if not enough attention is paid to security.  Trusted Solaris shows what happens if you pay too much.  Not enough attention and the system becomes a haven for viruses, bots, etc.  Too much and the system is years behind, runs slow, and is very hard to use. 

Performance can be similar.  Many changes that increase performance are intrusive.  Making them involves trading stability for performance.  Other times the performance wins are not visible to the end user.  Are they still worth making?  Finally, if you are not allowed to degrade performance, it is very hard to add new features.  Assuming your previous implementation was not poorly designed, it can be nigh unto impossible to add functionality without increasing CPU usage.

In each of these cases--and many others--a person or team tasked with improving only one side of the coin will make decisions that are bad for the product.  Recall that good engineering is about making the right tradeoffs.  To make them, one must consider both what is to be gained and what is to be lost.  When we give someone a role of focusing solely on security or performance or adding features, we skew their decisions.  We implicitly make one side of the coin trump the other.  If a person's review is based only on the performance improvements they made in the product, that person will be disinclined to care about how important the new functionality is.  If they are tasked solely with securing a product, they will tend not to consider the functionality they break when plugging a potential hole.

The right decisions can only be made at the place where a person is accountable for both sides of the tradeoff.  If the different silos (test, dev, security, performance) are too far separated, this place becomes upper management.  This is dangerous because upper management often does not have the time to become involved nor the understanding to make the right decision.  Instead, it is better to drive that responsibility lower in the chain.  Having engineering leads (not dev leads and test leads), as the talk advocated, is one way to accomplish this.  One person is responsible for the location of the quality line.  Another way is to increase interaction between silos.  Personal bonds can overcome a lot of process.  Sharing responsibility can work wonders.  Consider dividing the silos into virtual teams that cut horizontally across disciplines.  Make those people responsible as a group for some part of the product.  As is often the case, measuring the right metrics is half of success.

Tuesday, March 20, 2007

Selling Discrete Audio Cards Isn't Easy

EliteBastards has an interesting article speculating about the future of Creative Labs.  Creative made one of the first PC sound add-in cards (AdLib was the first that I recall).  They are certainly the most successful of the PC audio card manufacturers.


The author talks about the threats to Creative caused by the changes to the Vista audio stack, increased competition in the discrete audio board market, and the dominance of motherboard audio.  I think there are really two main factors which make life hard for Creative:



  1. Accelerated gaming audio isn't as necessary as it used to be.  Faster CPUs make it possible to mix dozens of audio channels together without the need for specialized hardware (see the sketch after this list).

  2. The quality of onboard audio is going up.  With the introduction of the HDAudio specification, onboard audio is able to get closer to feature parity with discrete parts.  The new audio fidelity requirements for Windows Logo compliance are driving higher quality audio into the system and helping to remove the quality deficit found in so many last-generation motherboard audio implementations.
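
To give a feel for why the CPU handles the first point so easily, here's a toy sketch of what software mixing boils down to.  This is my own illustration, not how any real audio engine is written: samples from each channel are summed and clamped.

// A toy illustration of software mixing: each output sample is the sum
// of the corresponding input samples, clamped to the valid range.
function mixChannels(channels) {
    var length = channels[0].length;
    var output = new Array(length);
    for (var i = 0; i < length; i++) {
        var sum = 0;
        for (var c = 0; c < channels.length; c++) {
            sum += channels[c][i];
        }
        // Clamp so the mix stays within the -1..1 sample range.
        output[i] = Math.max(-1, Math.min(1, sum));
    }
    return output;
}

// Example: mix two short channels.
var mixed = mixChannels([[0.5, -0.25, 0.1], [0.2, 0.3, -0.4]]);

Mixing dozens of channels at CD sample rates is only a few million additions per second, which is easy work for a modern CPU.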

That's not to say that Creative is out of it.  They still have a very powerful brand name and some nice hardware in their X-Fi cards.  There are still reasons to own add-in cards.  It's easier to get audio right on a discrete card than on the motherboard with all the signal noise right next to the analog audio traces.  Creative is also the only real game in town for accelerated gaming audio and a lot of games still utilize it.


The article not only talks about the PC sound card market but also gives a decent overview of the changes Vista made to the sound pipeline.

Thursday, March 15, 2007

Interview With Dave Caulton of Zune

Windows Weekly has an interesting interview with Dave Caulton of the Zune team in this week's netcast.  Dave discusses interesting issues like:



  1. The Universal deal

  2. Zune Christmas sales

  3. Is the Zune doing as well as expected?

  4. Zune's wireless capabilities and future

  5. What about Netcasts?

Dave Caulton is well-spoken and answers the questions candidly.  I didn't get the feeling he was trying to spin.  Then again, I'm a Microsoftie too so I'm probably biased.  If you are interested in the Zune, I suggest you give it a listen.

Tuesday, March 13, 2007

Programming Language Hierarchy

My last post made me recall this programming language hierarchy I ran across some time ago.  Read it for the humor, not the accuracy.

You Can't Teach Height, But Can You Teach Programming?

There's an old basketball saying--attributed to Frank Layden of the Utah Jazz-- that "You can't teach height."  No matter how much skill you have, if you are short, you'll be at a disadvantage on the court.  You can teach someone to be a better player, but you can't make them any taller.  Recently, there's been a meme running around the blogosphere asking whether there is an analogy to height in programming.  Is there something about programming that puts it out of reach for many people? 

Some people can learn to program.  Some can't.  There's a good paper called "The Camel Has Two Humps" by Saeed Dehnadi and Richard Bornat which claims this to be the case.  The authors explain that in CS classes there are two groups of people:  those who can program and those who can't.  "Between 30% and 60% of every computer science department's intake fail the first programming course."  Many methods have been tried over the years to overcome this deficiency but to date no one has made serious progress here.  The authors speculate that this is because some people just can't handle the meaninglessness of programming.  They point out that the teaching of formal logic suffers from the same problems.  If you don't want to read the whole paper, Jeff Atwood has a good synopsis.

Are the authors correct in their analysis?  Is it this lack of meaning that causes people to not be able to program?  I'm not so sure.  My guess is that it is the abstract nature of programming that stops people from being able to program.

Most people who take math long enough eventually hit a wall.  There is some point at which you can just no longer grasp what is being taught.  No matter how much you study, you'll never become proficient at that level of math.  For some people this comes early, with algebra.  For many others it is geometry or trig.  A large number of people hit the wall with calculus.  Still others at differential equations.  For me, Discrete Math is something I've never been able to master.  Oftentimes people will do fine in math one semester and struggle to even pass the next.  Why is this?  I think it is because math is increasingly abstract.  The further you get in math, the more abstract it is.  As things get more abstract, they become harder to follow.  This isn't true just in math but even in fields as distant as philosophy.  Following Nietzsche is a lot harder than reading Rawls, which is harder than Orwell's 1984.  Why?  Because each is written more abstractly than the one before it.

Programming is also increasingly abstract.  Linear programming in BASIC (old-school, not VB) is something most people can accomplish.  Functions are the next rung on the ladder.  After that comes pointers.  Some people just can't grok them.  I've conducted many an interview where the interviewee wrote down foo(bar), erased it, wrote foo(*bar), then finally foo(&bar).  Next is classes.  Not every C programmer can comprehend interfaces and class hierarchies.  Fewer still can create good ones.  Templates (or "generics" as they are now called) throw many people for a loop.  It's amazing how much harder it is to write a function with a T than an int.  At each of these stages, you'll lose some people.

I'm reminded of a time when a friend of mine who is not a programmer tried to create a random home page for his browser.  He wrote some javascript which contained a series of nested if...then...else statements.  Someone else suggested he consider using the switch statement.  I wrote this:

// Candidate home pages to choose from.
var list = new Array(
            "http://www.slashdot.org",
            "http://www.arstechnica.com",
            "http://my.yahoo.com",
            "http://www.flipcode.com",
            "http://www.powerlineblog.com",
            "http://www.realclearpolitics.com");

// Pick an index at random and send the browser there.
var a = Math.floor(Math.random() * list.length);
document.location = list[a];

The difference is more than just a more thorough understanding of the language syntax.  I think it reflects the use of a higher level of abstraction.  If...then is a brute-force method.  It is very concrete.  We all understand this.  On the other hand, an array is more abstract, and using the array object to get information about itself is not quite as intuitive as counting by hand and hard-coding it.
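
For contrast, the nested if...then...else approach would have looked something like this -- a hypothetical reconstruction, not my friend's actual code:

// Brute force: pick a number, then spell out every case by hand.
var a = Math.floor(Math.random() * 6);
if (a == 0) {
    document.location = "http://www.slashdot.org";
} else if (a == 1) {
    document.location = "http://www.arstechnica.com";
} else if (a == 2) {
    document.location = "http://my.yahoo.com";
} else if (a == 3) {
    document.location = "http://www.flipcode.com";
} else if (a == 4) {
    document.location = "http://www.powerlineblog.com";
} else {
    document.location = "http://www.realclearpolitics.com";
}

Every new site means another branch, and the hard-coded 6 has to be kept in sync by hand, whereas the array version picks up new entries automatically.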

What is the implication of all this?  If true, you can't teach everyone to program.  As you climb the programming language ladder, people will drop off.  Anyone who can program and has read Worse Than Failure will realize that some people just don't get it.  It isn't that they are stupid, they just aren't wired for this task.  At 5'10", I'm not wired for basketball.  Spending time teaching someone who isn't wired right to program beyond their talent-level will be an exercise in frustration for all involved.

This affects the way we should interview.  If people cannot program, hiring them with the expectation that we will teach them is fraught with danger.  The short of it is that we need to ask programmers to program during interviews (seems obvious, doesn't it?).  Scott Hanselman has a great post on this.

I'll end on a slight tangent.  I wonder if this can explain some of the complaints about VB.net over Visual Basic 6.0.  Moving to .Net added classes and a lot more complexity.  In essence, it added more abstractness to the language.  If my hypothesis is right, this would make a certain number of people who were comfortable with VB 6.0 no longer capable of grokking the updated version.

Monday, March 12, 2007

Why Don't PCs Have Hardware Video Decoders?

Robert X. Cringely speculates in his latest column that Apple may soon be adding H.264 hardware decoding and encoding to its Macintosh line.  Cringely is wrong more often than he is right, so the truth of this rumor is unknown.  I'm sure someone at Apple is at least studying the idea.  More interesting to me than whether or not Apple is shipping a $50 chip in all Macs to do H.264 is his talk about why no one has done this before.  He hints at a conspiracy of sorts to keep video-accelerating hardware away from the public.  I worked on DVD playback at Microsoft nearly from its inception.  There are good reasons you don't have hardware decoders in your system right now.  Let me explain.

Cringely says, "Maybe the reason is economic (save the $7) or maybe it is political (Microsoft or maybe Apple are for some reason opposed to hardware decoding). But like a lot of real reasons, I think it probably comes down to hubris and the simple fact that by decoding video in software, road warriors have another incentive to buy a more expensive -- and more powerful -- computer."  The truth is, Microsoft did work back in 1997 to enable hardware decoding in Windows.  The support shipped in the form of DirectX Media around Thanksgiving of that year.  I think it was in version 5.2 of the SDK.  This functionality first shipped with Windows in Windows 98.  Several vendors took advantage of this support.  Toshiba had an early system which shipped with DVD decoding in hardware.  I still have one of their full-length PCI boards hanging in my office.  IBM shipped the Thinkpad with hardware decoding for several years.  Dell offered an upgrade to a Luxsonor decoder card.  Creative shipped the DXR2 and DXR3 at retail.

Despite these many offerings, the market for hardware decoders never grew very big.  Why not?  Because software could do the job and it was substantially cheaper.  Whereas a hardware decoder might cost $10-$50 in COGS (just a guess), a software decoder could be had for something like $1 in volume.  Even on early PCs, the CPU was fast enough to decode the MPEG2 streams for DVD.  A Pentium 2 - 266 could almost keep up and a 333 had plenty of horsepower.  Today's operating systems and background processes sap a lot more power than did Windows 98 but there's still plenty of power there.

Rarely, however, does DVD play on a computer without some sort of hardware assistance.  As early as the ATI Rage II+, display cards included special hardware designed to accelerate MPEG2 decoding.  On Windows 95 and its variants, this hardware was accessed in a proprietary fashion.  With Windows 2000, Microsoft created a standard called DirectX Video Acceleration (DXVA) which is used today by all DVD decoders to utilize this hardware.  Depending on the display chip, you may get more or less acceleration, but the software is always offloading at least a portion of the work to the GPU.

In June of 2005 Microsoft shipped DXVA support for WMV-9.  With Windows Vista, we also support accelerating VC-1 (the variant of WMV used in HD-DVD and BluRay) and H.264 via DXVA.

So, is there a conspiracy to get you to buy more expensive PCs just to play video?  No.  The reasons are economic.  It's cheaper (and just as effective) to decode video in software with a boost from the GPU than to ship a dedicated chip that does nothing but decode.

So why then is Apple possibly considering a video acceleration chip in the Mac?  There are two possible reasons.  The first is that H.264 is really, really hard to decode.  It is a lot harder to decode than MPEG-2, DIVX, or even WMV9/VC-1.  Support for H.264 acceleration on graphics chips is also behind the support for other codecs.  Probably the more likely reason is for encoding.  Encoding is a lot harder than decoding (because it requires a full decode in addition to the encode portions).  If Apple wants to do something like timeshifting in H.264, it will need hardware to do it.  Media Center (now included in Vista Home Premium) usually uses dedicated MPEG-2 encoding chips for the TV functionality.

Sunday, March 11, 2007

Microsoft Announces HD-Photo

Microsoft just released a new photo format called HD-Photo, once known as Windows Media Photo.  Bill Crow has a good writeup on the benefits of the new format.  Basically it allows for high-fidelity photo editing and storage.  While those of us in the normal world do all of our photography in JPEG format, professionals do not.  They use TIFF or a format called RAW (which is proprietary to each camera).  These formats are less aggressive in their compression and allow for greater color detail.  HD-Photo is similar but attempts to be a more interoperable standard.  It also supports better compression than the formats I mentioned.  This is a technology to watch.  JPEG has some serious deficiencies which something like HD-Photo can alleviate.  Two questions come to my mind:



  1. Will camera makers and photo editing suites adopt HD-Photo?

  2. Will consumers care?  Is JPEG good enough for them?

Only time will tell if HD-Photo takes off or not.  If it does, I think everyone will benefit.

Wednesday, March 7, 2007

TechFest 2007

Microsoft has a large number of pure researchers working for it.  They are collectively known as Microsoft Research (MSR).  Once a year a large contingent of these researchers gathers in Redmond for an event called TechFest.  All full-time employees are invited over to attend lectures and wander the "show floor" where various teams have set up booths.  The researchers are available to talk to and are more than happy to describe what it is they are working on.  There are people working on 3D graphics, podcasting, video, image detection, automated error detection in code, operating system concepts, etc.  You can see some videos of the items available here.

TechFest is interesting.  I think it is Microsoft's attempt to avoid becoming another Xerox PARC.  For those unfamiliar with PARC, it was the Palo Alto Research Center that Xerox set up in the 1970s to explore the future of computing.  Amazing things happened there.  Among the inventions were the personal computer (Alto), Ethernet, the laser printer, the bitmapped display, and the modern WIMP GUI.  With all this cool stuff being invented, why isn't Xerox's name associated with computers today?  They were a photocopying company and didn't have an interest in actually developing any of their inventions.  TechFest is an opportunity for people in the product teams to meet the people in the research teams and hopefully cross-pollinate.  Many items from research have made their way into our products via just this method.

Monday, March 5, 2007

The Ephemeral Nature of Computers

My wife and I are talking about getting some landscaping done.  One of the interesting options we have is to get a computerized sprinkler system.  The system connects to your PC and uses the web to determine the weather.  It can use this information to make intelligent choices.  For instance, if rain is coming, it won't bother to water.  This sounds really cool until you stop to think about it.  How long do we expect the sprinkler system to last?  10 years?  20 years?  What is the chance that computers will be around in 20 years that can still run this irrigation system?  Even if we preserve one, what is the chance that whatever web service is behind it will still exist and will still have the same interface?  Zero?
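
The decision logic itself is trivial.  Here's a minimal sketch of the kind of rule such a system might apply -- the forecast object and its field names are entirely hypothetical, not anything from the actual product:

// Decide whether to water, given a forecast from some weather service
// (hypothetical field names).
function shouldWater(forecast) {
    // Skip watering if rain is likely in the next day.
    if (forecast.chanceOfRainNext24Hours > 50) {
        return false;
    }
    // Skip if recent rain already soaked the ground.
    if (forecast.inchesOfRainLast24Hours > 0.5) {
        return false;
    }
    return true;
}

The hard part isn't this logic; it's that the weather service, its data format, and the PC-side software all have to survive for the system to keep working.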

One problem computers face is that they accelerate the pace of change.  Sometimes this can be very good.  It is the secret to much of the large increase in productivity this country has experienced.  However, because things change so quickly, nothing stays around very long.  Within the lifetime of everyone reading this, there will likely be several major computing platforms.  So far, we are not very good at preserving the old ones.  Try finding a machine to read a 5 1/4" floppy disk.

This, by the way, is the main reason I won't buy DRM'd music.  When formats change, I can always re-rip my CDs.  What if Apple stumbles and the iPod becomes uncool?  What will I be allowed to do with my FairPlay-encrypted music files?  I can't transcode them because they are encrypted.  I still have the CDs I bought 15 years ago in high school.  They still work.  Does anyone think that the iPod will still be using the same music format in 15 years?  If you think it will, just go talk to anyone who bought music from MSN Music.

Will the web change this?  Will it preserve things better or worse?  My vote is for worse.  Take for example our digital photos.  When I have 3 gigs of pictures on my hard drive I have some hope of preserving them.  Let's say that .jpg dies out and some new format comes to dominate.  I can always run a conversion program to keep my pictures around.  Now assume that all of the photos are on Flickr instead of my hard drive.  How do I preserve them?  Sure, Flickr could just convert them for me but what if Yahoo goes out of business?  What if they just decide that they are not making money on the free storage of photographs model?  Then I have to find a way to download them all, convert them, then upload them to another site.  It can be done, but it's more custom work than just getting a photo converter program to run locally.  When we put our data into someone else's hands we lessen our ability to manipulate it which lessens our ability to preserve it.

Consider gaming.  I can still find ways to play games written 25 years ago for the Commodore 64.  What's the chance anyone will be able to experience World of Warcraft in the year 2030?

Friday, March 2, 2007

Soundscapes 101

Back in May of 2006 I was invited to a recording session at Microsoft Studios (yes, we have our own studios--very cool place) with Robert Fripp.  He's the artist brought in to help develop the Windows sound for Vista.  This was the second of two sessions that I'm aware he had here at Microsoft.  Robert Scoble was invited to the session and videotaped it.  I'm told that this is a rare recording of Fripp at work.  He goes through the creation of a soundscape and gives an explanation of the gear he is using.  He also plays a few early versions of what became the Vista "Pearl Sound."  It amazes me that all this work was distilled down to just 4 notes.  I kinda like the power chord versions.  The video of the recording session is followed by an interview with Fripp.  The person working closest with Fripp here is Steve Ball, one of the Program Managers I work closely with as part of the audio team in Windows.  If you have any interest in Fripp or the sound production process, I suggest you give this video a view.


P.S.  You can spot me in the white t-shirt at about the 7:30 mark.

Geeks Wanted

I thought this was too cool not to post.  For those that don't have the ASCII table memorized, the sign says "Now Hiring".

Hat tip to Kotaku.