Thursday, December 31, 2009

Resolved: To Blog More in 2010

As the year closes I look back and see that my blogging really dropped off this past year.  I intend to try to blog more over this upcoming year.  My position at work has changed from a lead to a manager and that gives me a whole new perspective on things and a lot of new ideas to blog about. 

I hope your 2009 went well and that 2010 goes even better.  For those who are hurting in this bad economy, hang in there.  The sun always rises even if the night is long.  Don't give up improving your skills.  It will pay off in the long term.

Monday, November 23, 2009

A Taste of Stack Overflow DevDays

If you missed Stack Overflow DevDays, there is some audio from it available on Stack Overflow Podcast #71.  I wish there was a longer version of this.  It’s only about 1/2 hour of outtakes from the conference, but it is still interesting to hear.  These snippets are followed by a long discussion with some of the speakers.  The conversation rambles and the audio quality is poor so feel free to stop listening after the conference outtakes.

Thursday, November 19, 2009

Design Patterns Are Not Outdated

A comment left on my answer to a question over on Stack Overflow has me a little worked up.  I've seen this meme come out of programmers more and more and it just doesn't seem accurate.  The statement goes something like this: "Design Patterns were only useful because C++ (or Java) was so broken."  The implication is that design patterns belong in the dustbin of history now that we have moved on to more enlightened languages like Python or Ruby.  In this particular instance the commenter was talking about the strategy pattern.  His (?) assertion was that the need for the strategy pattern is not present in a language with first-class functions.  My response is threefold.  First, this argument is historically ignorant; the design patterns came as much from Smalltalk as from C++.  Second, it is not strictly true; first-class functions alone don't obviate the need for the strategy pattern.  Finally, providing an alternative implementation does not make the first implementation bad.

The argument that design patterns in general are due to flaws in C++/Java is historically inaccurate.  Design patterns (like much of modern programming) originated in the Smalltalk community.  Smalltalk is dynamic.  It has first-class functions.  Most of the flaws pointed to in C++ which "modern" languages solve did not exist in Smalltalk.  The patterns, then, have value independent of the flaws of the accused languages.  For instance, the Strategy pattern is covered not just in the Gang of Four book (where it happens to have a C++ example), but also in the Design Patterns Smalltalk Companion, which points out that it is used in Smalltalk in the MVC framework at the heart of Smalltalk's GUI.  The controller is a strategy.  It is also used in ImageRenderer and a few other places.

First-class functions do not, by themselves, obviate the need for the strategy pattern.  This is more a nit than a real argument, but it is true nonetheless.  To say a language has first-class functions means merely that functions can be passed around as values.  It does not mean that the language implements closures.  Closures are a way of capturing the context in which a function was created.  They can be used to retain state between calls to a particular function instance.  Without closures, there is no state.  Without state, many uses of the strategy pattern fail.  Consider for a moment using the strategy pattern to encapsulate encryption algorithms.  Without closures (or objects), you would have to pass the encryption key to the function every time it was used.  This is possible, but not terribly elegant.
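To make the closure point concrete, here is a minimal Python sketch.  The XOR "cipher" and all the names here are illustrative toys, not a real encryption algorithm.  The key is captured once when the closure is created; without closures (or objects), every call site would have to pass the key explicitly:

```python
def make_xor_cipher(key):
    """Build an encryption 'strategy' as a closure.

    The key is captured in the closure's environment, so callers
    never need to pass it again.
    """
    def encrypt(data):
        return bytes(b ^ key for b in data)
    return encrypt

# Each strategy instance carries its own state (its key).
cipher_a = make_xor_cipher(0x2A)
cipher_b = make_xor_cipher(0x17)

message = b"hello"
scrambled = cipher_a(message)
assert cipher_a(scrambled) == message          # XOR with the same key round-trips
assert cipher_a(message) != cipher_b(message)  # different keys, different output
```

Without closures, the signature would have to be `encrypt(data, key)` and the key would be threaded through every call, which is exactly the inelegance described above.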

The existence of an alternate implementation does not make the original implementation any less useful.  The fact that I can get much of the power of OO out of first-class functions and closures does not mean OO is now of no value.  There are advantages to both techniques.  Empirically, there does appear to be power in OO that is not captured (easily) by purely functional languages.  Most successful functional (or pseudo-functional) languages have eventually adopted OO features; see Python, Ruby, Common Lisp, Scala, etc.  Let us return to the example of the strategy pattern.  Is its utility obviated by the use of first-class functions plus closures?  In many cases it is.  Certainly it could be in the encryption example.  On the other hand, strategies are often more complex than mere functions can express.  The controller in MVC is a strategy.  In anything but a toy application, the controller will consist of multiple functions.  Sure, one could create these functions with the same closure and thus share state, but is that really a superior model?  I would argue that it is not.  It is a less clear mechanism because the fact that the functions are tied together is less discoverable.  OO languages and functional languages each do things differently.  Things that are easy in one are more difficult in the other.  Neither is superior in all respects.
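To illustrate the multi-function case, here is a toy Python sketch (the class and method names are hypothetical, and the "model" is just a list).  An object groups the related operations and their shared state into one discoverable unit; two closures over the same list would do the same job while hiding that coupling:

```python
class Controller:
    """A strategy with several related operations sharing state.

    A toy stand-in for an MVC controller: grouping on_click and
    on_undo in one class makes their connection explicit.
    """
    def __init__(self, model):
        self.model = model  # shared state for all operations

    def on_click(self, item):
        self.model.append(item)

    def on_undo(self):
        if self.model:
            self.model.pop()

model = []
controller = Controller(model)
controller.on_click("draw")
controller.on_click("erase")
controller.on_undo()
assert model == ["draw"]
```

The closure version would work, but a reader would have to notice that `on_click` and `on_undo` happen to close over the same list; the class states it outright.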

It should be noted that when I say "design patterns" above I am referring to it in the common sense of the object-oriented programming design patterns made popular by the Gang of Four book.  In a more general sense, each language has its own set of patterns and these can also be thought of as design patterns.  Some of the OO patterns are specific to OO languages and the need doesn't translate to functional languages.  Others are more "in the large" and likely translate well to all languages trying to solve big problems.  It is my claim, however, that most of the GoF patterns will be useful in any OO language.  They are not artifacts of particular implementations such as C++ or Java.

Wednesday, November 18, 2009

Is there really a benefit in lossless audio formats?

Lossless codecs are all the rage amongst those who aspire to be audiophiles.  Whether it is ripping CDs in a format like FLAC or WMA Lossless or listening to the TrueHD track on Blu-ray movies, there are those who swear by it.  Most audio formats like MP3, AAC, and WMA are lossy formats.  They compress the audio by throwing away parts that humans theoretically cannot hear.  This is called "perceptual coding."  Lossless codecs don't throw away any information but instead compress more like Zip does.  Lossless formats require a lot more space to store and a lot more bandwidth to transmit.  Are they worth it?  Can people really hear the difference?

TrustedReviews says no.  They go so far as to suggest that anything over 192kbps MP3 is virtually impossible to differentiate.  "[A] few people in the last six months or so - people who take their audio gear seriously and have spent thousands of pounds on Hi-Fi equipment - have admitted privately to us that 256kbps MP3 is easily good enough for serious listening, and that they struggle to hear much difference over 192kbps MP3 in many situations."  They conducted some A/B listening tests to see if ordinary people could perceive a difference.  The results did not support the extra expense and size of lossless formats.  In fact, most people couldn't even differentiate between the 192kbps MP3s and a FLAC-encoded version of the same songs.  The test wasn't scientific, but there's a pretty good chance it matches what those reading this blog will experience.

Considering that most people listen to their music in noisy environments, on suboptimal speakers, or on tiny headphones from an MP3 player like the Zune, the chances they will ever be able to perceive the differences in audio are quickly diminishing. 

The short of it:  don’t waste the space ripping everything to lossless unless you plan to do a lot of transcoding in the future.  Don’t go out of your way to get a TrueHD movie setup.  AC3 is going to be just fine.

Friday, November 13, 2009

A Review of a Kindle

Six months ago I purchased a Kindle 2.  I originally bought the Kindle to make travelling easier.  I tend to carry a lot of books with me when I take a trip and those books get heavy.  With the Kindle, I could carry just this one device instead of 5 books.  The Kindle didn’t disappoint.  It weighs less than the typical paperback book.  It fits nicely in my Scottevest jacket.  I typically have about 80 books on mine at any given time giving me plenty of potential reading material.  If that isn’t enough, there is the Kindle store with some 360,000 books.

The Kindle satisfied the purpose I bought it for, but has exceeded my expectations.  Not only do I use the Kindle when travelling, but it has become my preferred reading device.  The screen is a delight to read on.  The contrast may not be quite what it is on a real book, but it is plenty good.  The screen on the Kindle is much more comfortable to read on than the screen on a phone or a laptop.  There is no refresh rate and no backlighting.  This results in a significant reduction in eye fatigue.  I can read on the Kindle as easily and as long as I can read a paper book.

In addition to being a great place to read, there are several features of the Kindle that make it my preferred reading tool.  The first is the built-in dictionary and the second is the ease of taking notes.  When reading a dead-tree book, if I come across a word that I don't know, I will usually guess at the meaning from the context and move along.  With the Kindle, I can just move the cursor over the word in question and get a definition at the bottom of the screen.  In this way I am able to understand the nuances of the text and expand my vocabulary.  The Kindle is also a great place to take notes.  Want to add a note?  Just start typing using the integrated keyboard.  Want to highlight some text?  Move the cursor to the start, press down, move to the end, press down again.  The notes and highlighted areas are collected in a text file that you can copy to your computer.  They are also available online, and the notes will follow the book to other devices (like the new Windows software).

So what isn’t to like?  The Kindle is an excellent book-reading platform, but it is a single-task device.  It is great at what it does and not good at anything else.  It has a built-in web browser, but it is the sort of thing you would only want to use in an emergency.  It does not render pages well, is difficult to navigate, and is very slow.  For instance, when composing an e-mail via either Hotmail or Gmail, the screen draws each letter, erases it, then draws it again.  It is very easy to get well ahead of the cursor even on the limited keyboard.

The Kindle supports MP3 playback, but not in any useful fashion.  You cannot see the songs.  You cannot select the songs.  You can skip to the next song, but that is all.  There is no shuffle.  Playback happens in the order the songs were put on the Kindle.  To say this feature is limited is an understatement.

The Kindle only reads its own formats.  It can read variations of the Mobipocket format but cannot read PDF, ePub (the standard e-book format for everyone else), or even the encrypted Mobipocket format found for free at many libraries.  This choice perplexes me.

The note-taking is simple and works well, but it is capped.  If you highlight too much of a book, the highlights will continue, but the material will not end up in the notes file.  This wouldn’t be quite so bad if you were warned, but you aren’t.  Instead you find out later when you go to the notes file and see a warning instead of the highlighted text.

Perhaps the most disappointing part is the lack of software innovation going on.  As someone accustomed to the rate of innovation on other devices, it is disappointing to see no new firmware or features being pushed.  The hardware platform is stable, so why not improve the MP3 playback?  Why not add new formats?  Why not add support for tags or folders?

A few questions and answers:

How is the battery life?  It is amazing.  With the wireless left on, it will last several days.  With the wireless off (and there is no reason to leave it on), it will last weeks.

Is it economical?  No.  If you buy a Kindle, don’t buy it to save money on books.  Sure, they are a little cheaper than the hardcover, but maybe 10%.  At $259 for a Kindle 2, it is going to take a long time to make up the difference.  If you watch, there are many free books available which can help, but it is not a cheap device. 

Are most books available?  That depends what sort of books you like to read.  I have found that a large percentage of what I read is available.  I still run into many books I want that are not available, but I’m not running out of books on it either.

How about technical books?  Surprisingly, it is pretty good.  I have read a few programming books on it, including Programming Clojure and JavaScript: The Good Parts.  It renders them just fine.  Where it falls down is random access.  I don’t recommend using it for reference material.

How does it compare to the Nook?  I don’t know.  I haven’t used the Nook.  It appears to have superior hardware in most respects, but the book pricing is much worse.  If I were making the choice today, I would still choose the Kindle.  The Nook does appear to be giving it a run for its money though.  Strong competition is probably what the Kindle needed.

Do you recommend the Kindle?  Yes.  Highly.  If you like to read, get one. 

Wednesday, October 21, 2009

StackOverflow DevDays

I spent the day at Benaroya Hall for the 1st (annual?) StackOverflow DevDays conference.  Overall, eight speakers took the stage on topics from ASP.Net MVC to Python to the Google App Engine.  The room appears to hold just over 500 people and it was filled to capacity with programmers.  There were some vendors in attendance, including Fog Creek Software and someone showing off HexBugs.

The day started off with a short video titled Scrumms, a funny spoof on life at Fog Creek and the StackOverflow podcast.  It was quite entertaining.  I hope they release it on the web after the conferences conclude in a few weeks.

Joel Spolsky was the first speaker.  I always enjoy reading and listening to him.  This speech was not a disappointment.  He was as entertaining as ever.  The subject under discussion was design elegance.  He began by pointing out that software often gives the user too much choice.  Oftentimes it interrupts the user’s work flow to ask a question most users are unprepared to answer.  Other times there are options pages with myriad settings which no one could know enough to actually use.  What is a “trusted user” on GMail anyway?  He cited a study where one store put out many varieties of jam (24?) for people to try.  Many did, but substantially fewer actually purchased than when the same store put out only a half dozen varieties.  People are intimidated when given too much choice.  Joel recommended the book The Paradox of Choice.  He then went on to talk about the simplicity offered by companies such as 37signals, whose products do only one thing, but do it well.  He argued that this isn’t the solution.  Customers will demand more features, and to grow, a company must grow its feature set.  More sales lead to more features.  Choice must be offered then, but how to do it in a way that doesn’t alienate the customer?  The solution Joel offered was to make sure the choices support what the user is doing.  The users should be modeled and choices aligned along the path of their behavior.

Joel was followed by Scott Hanselman, who gave an overview of the ASP.Net MVC framework.  This is a web framework built on top of ASP.Net which exposes much more direct control to the programmer.  For instance, programmers now have direct control of their URLs (yeah!).  Scott was an entertaining speaker, although I think he was a bit too self-deprecating about Microsoft.  He spent most of the talk showing the audience various features in existing and upcoming Visual Studio products which make ASP.Net MVC programming easy.

Next up was Rory Blyth talking about iPhone development.  He was an engaging speaker who obviously knows a lot about what he is doing.  I had never looked at iPhone or Objective-C development before.  I can’t say I’m terribly impressed.  The tools look adequate, but aren’t as good as Visual Studio or even Eclipse.  Objective-C looks like a mishmash of C and Smalltalk.  Rory described learning to develop for the iPhone as Stockholm syndrome, where you eventually come to love your oppressor.  The iPhone is an attractive target to develop for from a business perspective (maybe), but the SDK doesn’t appear to be the reason people are flocking to it.  One highlight at the end was when Rory showed Novell’s MonoTouch, which allows for C# development targeting the iPhone.  This looks like a slick environment, even if it is a little pricey.

Following Rory came Joel again with a sales pitch for FogBugz 7.  I have to say I was impressed with the evidence-based scheduling.  There is a new plugin called Kiln for code reviews which looks alright, but I think I prefer the UI from Malevich.  For instance, the comments didn’t appear inline with the text when they were being made.  They did later when the submitter saw them, though, so perhaps I just missed something as Joel blazed through the UI.  If I were a startup, I would definitely consider using FogBugz to handle my planning and bug-tracking needs.

Lunch was catered by Wolfgang Puck and was reasonably good considering it was a conference and the whole thing only cost $99.  They divided up the tables into discussion topics (great idea!).  I went to a table about programming languages, but others included agile methods, startups, and a few I forget.

The first speaker after lunch was Cody Lindley from Ning, who was talking about jQuery.  jQuery has been on my list of topics to learn about, but never made it to the top of said list.  It’s fascinating technology, especially considering I just read Crockford’s JavaScript: The Good Parts recently and got a feel for the language as it was meant to be used.  For those that don’t know, jQuery is the most popular JavaScript framework right now.  It runs on 35% of all JavaScript-using pages on the web and 21% of all web pages of any kind.  Its primary use is to make manipulating the DOM (the page structure) much easier.  Boiled down, it lets a programmer easily select some portion of the page and apply an effect to it.  In implementation, it appears to be a JavaScript function that wraps up a collection of page elements into an object which then provides methods for manipulating the set.  For his demo, Cody used an online JavaScript testing and debugging tool.  It looks very nice.

Daniel Racha from Nokia was next up to talk about Qt (pronounced "cute," not "Q-T").  This is a cross-platform UI toolkit and development platform recently purchased by Nokia.  It is also the basis of the K Desktop Environment (KDE) for Linux, and KDE is where WebKit originated.  WebKit is the rendering engine that powers Safari and Chrome.  Nokia’s plan is to use Qt as the basis for its phone app development story.  The technology and the tools are both mature and highly capable.  Daniel did a good job selling the merits of Nokia’s tool chain.  Considering the toolkit supports six platforms right now (Windows, Mac, Linux, plus several phone OSes), I can see how this might be the way for cross-phone applications to be written.  Daniel also mentioned the Nokia N900, which apparently is open source to the point where end users can upload their own OS.  I can foresee third-party variants like those created for the Linksys routers.  This could be an interesting challenge to Apple’s iPhone strategy.

Ted Leung from Sun came to talk about Python.  This is another language I haven’t had time to learn yet.  His slides were terribly hard to read (purple on black, seriously?), but the content was good.  He gave a quick overview of the language basics and then talked about more advanced features like destructuring, generators (simple continuations), decorators, and extensions.  Python has definitely taken a lot from the world of functional programming.
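For those who, like me, haven’t gotten to the language yet, here is a short sketch of three of those features (plain Python, nothing beyond the standard language):

```python
# "Destructuring" (tuple unpacking): bind several names at once.
x, y = (1, 2)
assert (x, y) == (1, 2)

# A generator: each yield suspends the function and resumes it on the
# next request, which is why they can be described as simple continuations.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

assert list(countdown(3)) == [3, 2, 1]

# A decorator: a function that takes a function and returns a
# wrapped version of it.
def twice(f):
    def wrapper(arg):
        return f(f(arg))
    return wrapper

@twice
def increment(n):
    return n + 1

assert increment(5) == 7  # wrapped, so incremented twice
```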

Dan Sanderson was next up and gave an introduction to the Google App Engine.  It looks like an interesting way to build out scalable web sites.  This is Google’s competitor to Amazon Web Services and Microsoft’s Azure.  It supports the Python and Java virtual machines, so sites can be implemented in Python, Java, or any language that targets the JVM (Clojure, Scala, JRuby, etc.).  With that kind of support it would seem an app programmed for it would be capable of being moved elsewhere, but that is not the case.  The App Engine is a very different sandbox to play in.  The database is non-relational and doesn’t support SQL (or at least not full SQL), the filesystem is different, and there is no direct network access.  In short, once an app is written for the App Engine, it won’t easily run anywhere else.  The environment looks intriguing, but you are putting your company’s fate into Google’s hands.

The day ended with Steve Seitz from the University of Washington talking about some of the advances in 3D image extrapolation.  Steve is behind much of the technology that became Photosynth.  Steve’s talk was light on programming content but high on “Wow!” factor.  This was a great way to end the day.  I’m not sure my mind could have taken another heavy talk.  The stuff Steve showed us was mind-blowing.  They are able to take regular photographs, process them, and recreate the 3D scene.  Not only that, but newer technology allows for a walkthrough of the site and even full texturing.

Overall the day was well spent.  I got to learn about a lot of cool new technologies and renew my excitement for programming.  Now I just have to pick one and find the time to learn it.  If the event takes place again next year, I definitely intend to go.  Thanks to Joel Spolsky and the folks at Carsonified for putting on such a great conference.

For more, find other reviews on Meta.Stackoverflow or watch the Twitter stream.


Wednesday, September 30, 2009

Forging a Team Identity

For a group of coworkers to have a chance of becoming a team, they must share a common sense of purpose or identity.  Dave Logan in Tribal Leadership calls this a “Noble Cause.”  On small teams this often comes naturally.  Everyone is working on the same project or related set of features.  As teams become larger, their goals become more dissimilar and team identity becomes harder to forge.  It is up to the leader to forge this team identity.

Having a unifying cause (whether noble or not) is important to getting the most out of a team.  If people are all working toward a common goal, they will make the right compromises and do what is best for the team as a whole.  If there is no unifying cause, people will be optimizing locally, which is usually at odds with global optimization.  Each person will be trying their hardest, but if they are pulling in different directions, some of their effort will cancel out the effort of others.  Having a unifying cause does not guarantee that people work in concert, but it is certainly a prerequisite.

How does one go about finding a unifying cause?  First look at the obvious candidates.  If the whole team is working on a particular product or feature area, just use that.  In the past I have unified teams around the concept of working on audio in Windows or being the video team.  Sometimes there is no single feature to focus on.  In my last position I had 3 teams working for me.  Each had a distinct area to work on.  We were all part of the Windows organization, but we were such a small part that we couldn’t take that as our identity.  Each team even had its own identity, but as a group of leads, we didn’t.  What Joe was working on didn’t relate much to what Jane was working on.  I was convinced by Tribal Leadership and Good to Great that we needed a point of unification, but there wasn’t a product we had in common.  It was time to forge an identity rather than find one.

The unifying principle I chose was becoming better managers together.  This is something we all had in common, being managers, and something we could help each other with.  Even if the technologies we were working on didn’t form a conceptual whole, our positions did.  Toward this end we made sure to have a lot of discussions about managing people.  We would discuss situations and how to handle them.  We started a weekly “book club” where we would read a chapter of a book each week and discuss it in our leads meeting (more on this in a future post).  It worked well.  The team began to gel and work together.  People began forming triad relationships rather than being dyadic.  That is, they started helping each other rather than merely reporting everything to me.

It is important that a team, whether it be a team of ICs all working on the same feature or a group of leads reporting to a manager, have some common identity.  This in turn requires a goal or a unifying principle.  If there are no obvious candidates to be found, an identity should be forged from something less obvious.  Especially in the latter case, it is easy to operate without a unifying goal, but things will run less smoothly.  Be intentional about ensuring each team has a common identity.

Tuesday, August 4, 2009

Own the Feedback

Some time ago I was at a management training course. The group was divided into those who were managers of managers known in this course as M2s and those who were what I have been calling leads–that is managers of individual contributors–which they called M1s. I was part of the M2 group. The M1s were divided into groups of 10 or so and an M2 was assigned to each group as the Manager and another as merely an individual contributor. A couple of the M2s–myself included–formed the executive committee and were not involved in the working groups at all the first day. On the second day we changed things up and I wound up as an IC on a team. I knew I could not come into this group that didn't know me and act as the leader. It wasn't my role and it wouldn't be accepted. I tried really hard to play the supporting role.

When I arrived, the group was in some trouble. They were in the middle of wordsmithing their mission statement (this on day 2!!). To make it worse, all 10 members were involved in this and it was becoming a marathon session. In an effort to help out, I made several suggestions and tried to ask questions to point them in the right direction. I didn't tell them how to act. They could freely take or leave my suggestions. At least, that is how I perceived it.

At the end of the 2nd day there was a feedback session. Many of the members said something to the effect “I felt you were trying to take over the team.” My initial reaction was to challenge the validity of these statements. Not out loud of course, but internally. I had no intent in taking over. In fact I had been trying hard to avoid taking over. I carefully crafted my suggestions in such a way that they were not instructions, but just offerings of opinion that carried no weight of authority. If I had been actively avoiding taking over, these people must be in error in their judgement.

When the M2s gathered I shared this experience with someone who had become a mentor to me. His sage advice was, “You have to own the feedback.” What he meant was that there had to be truth in there. Even if the feedback wasn't an accurate representation of reality–I was *not* trying to take over–it was an accurate representation of their perception of reality. Something about the situation and my actions had caused this perception. I could either accept that and look for the cause or I could reject that and learn nothing.

I chose to accept the criticism and looked for the root cause of their perception. I was totally new to the situation. These people did not know me. Even though my suggestions were not intended as instructions, they were perceived that way because I was perceived as the new guy, and new guys shouldn't act with authority so fast. In their minds I was the new hire who didn't understand the situation. I had built up no relationships, no human capital with them, and thus had no implicit trust. I thus spoke with more authority than someone in my position should have. It wasn't so much me as the position I held. I needed to act more in line with their expectations. I needed to build the relationship before giving advice.

This analysis was borne out later as people gave me the feedback that I had done less to take over as the session wore on. In reality, I had initially been rejecting their feedback and so hadn't changed. My actions were the same late on day 2 as they were early that day. The actions hadn't changed, but the perception had. I had built up some human capital and so my actions were being perceived differently. The specific takeaway is to know your perceived position and to act within it. Perception matters. It is important to build relationships before trying to effect change.

The more general takeaway is to always own the feedback. When people say they perceive you in a particular way, that cannot be argued. The fact is, whether you intended it or not, you were perceived in that manner. The only two options are to accept the feedback and act on it or to ignore it and burn the relationship. The world is too small to go around cavalierly burning relationships. Owning the feedback means accepting that you did something to create the perception. It could be reality. It could have been that I really was trying to take over the group. Or it could be merely perceptual. I had acted in such a way so as to appear to want to take control even though I didn't. It is important to discern which of these is true because the solution is different in each case.

This is advice I end up giving often at review time. Someone gets a tough message and wants to challenge it. “But I didn't do …” or “But that's not how it really happened.” These responses are not taking responsibility for the event. They are not examples of owning the feedback. The person proffering these responses will not fix the problem because they are externalizing blame for it. The true problem lies with someone else and so they have no responsibility or even ability to fix it. My response to this is consistent. The reality is that this is the perception. If they didn't do what they are accused of, then they did something to cause someone to think that they did. Whatever that was, they need to look for a way to address it. If not, they'll be getting the same review next year and at that point it will become a trend.

Monday, July 27, 2009

How to Interact with Your Team as a Manager

As one moves from being a lead (manager whose reports are individual contributors) to a manager (manager whose reports are leads), there is an important decision to be made about how to interact with your skip-level reports. That is, how should a manager handle his interactions with the individual contributors reporting to his leads. There are two ends of this spectrum and managers often gravitate to one end or the other. The first option is to bypass the leads and go directly to those on the front lines. The second is to route most interactions through the leads. Both have their advantages and disadvantages. Where to position yourself between these two ends of the spectrum is not an easy decision to make.

If a manager decides to bypass her reports and go directly to the individual contributors (ICs), she has direct knowledge of how things are progressing. She develops a direct relationship with the ICs. Things are more likely to be done the way she wants. However, there are some significant downsides to this behavior as well.

First, it is hard to scale this approach. The fact that the organization chose to have leads should indicate that the work is too big for one person to manage. If the manager could handle all of the ICs directly, the lead position would be extraneous and even harmful. The reality is that this manager is unlikely to have enough time to closely monitor the work all of the time. Her interaction with the team will then tend toward drive-by management. She will swoop in and give direction on a particular part of a project but then lose focus before the results of that direction become evident. This can lead to poor decisions being implemented and frustration among the individuals carrying out the instructions.

Second, it can lead to discontent among the leads. They will have particular ways they want work done and a priority order for what they want done. Having their manager go directly to their reports means these instructions will be contradicted. This causes confusion among the ICs, who will have conflicting priorities and goals. The lead will also feel his role being undermined by his manager. When she goes to his reports and gives them instructions, he is out of the loop and will begin to feel unnecessary or even frustrated. This can cause the lead to stop performing the duties of a lead and allow the manager to do that work instead. As the manager is unable to give the same amount of attention, this often leads to a situation where no one is paying attention.

What about the alternative? Routing work through the leads. When a manager wants her team to do something, rather than going to Fred the IC with instructions, she asks the lead to ask Fred to do the work. This allows the lead to always be in the loop. It allows the lead to ensure that there is a clear message (see my earlier blog post on providing clarity) so that the IC only has one set of priorities. It also allows the manager to scale. Rather than having to check in on Fred's progress, she can just ask her lead in their 1:1 how things are going. The details of the work can be left to the lead and the manager need only bother with the end goals. This may sound good, but there are downsides as well.

First, going through another person in communication always risks the message getting distorted. As anyone who played telephone in elementary school can attest, the more people that retransmit a message, the more it will change. In the elementary school game children are asked to sit in a line. The first child is given a message and asked to tell the next child in line. Each child in turn is to repeat what they heard to the next child. The final child will announce to the group the message he received. With rare exception, the final message is not even related to the initial one.

Second, going through another person can limit the amount of feedback received. If Jane the manager tells her lead Marcus to have his team make the iWidget program interface with the new build process, there is some chance that Jane will not learn that this is more difficult than initially conceived. If Marcus does his job poorly, he may not relay the message to Jane. This leads to frustration on all fronts. Jane is upset because the project is taking too long. Fred the IC is upset because he is being asked to do the impossible. Marcus may even be frustrated because both his manager and his report are frustrated.

The third and perhaps most insidious downside of this management approach is the lack of relationship that gets built. People will subconsciously distrust those who they do not have a relationship with. Their natural tendency is to distrust until they have reason to trust. The reason doesn't have to be large. It could just be seeing that the manager treats others fairly or having casual conversations which convey the sense that the manager is a “real person.” The result of this psychological phenomenon is that until a manager has built up social capital via relationships, she will not get the benefit of the doubt from her team. Subtly, the team will interpret ambiguous actions in a negative light. Asking for a code review will not be seen as a way to strengthen the team's coding skills but rather as a way to check up on people and “get them” if they aren't doing well enough. Mail sent asking about the status of a bug will be viewed as accusatory rather than merely inquisitive. The most insidious part is that the manager will probably never realize this is happening because she doesn't have the relationships that would provide the necessary feedback channel.

In the past year I made the transition to manager and faced this exact quandary. My decision was to route most interaction through my leads. When I needed work done, I would ask the lead to have the work done instead of going directly to the IC. I knew the downsides of not letting leads do their job and wasn't going to make that mistake. Instead, I made the mistake of being too distant. I built up relationships with my direct reports, but not as much with the teams reporting to them. Based on this experience, I will be trying a more mixed approach in the future.

I still believe it is important not to bypass the leads when giving work instructions.  Yes, this has the telephone problem, but the consequences of avoiding that problem are too great.  At the same time, it is important to build a relationship with the individual contributors.  This means ensuring direct contact.  At the lower edge, contact should be made at the individual level by wandering the halls and by skip-level 1:1s.  At the higher edge, contact should be made by sending out broad mails laying out high-level vision, by all-team meetings even if there is no business demand for them, and by occasionally attending your team leads' meetings.  The middle (direct business communication) should be left to the leads.  By initiating contact at the individual level for personal matters and at the vision level for business, you should generate enough “social capital” that the team will come to trust you and give you the benefit of the doubt.

Wednesday, July 1, 2009

Be Intentional

My old manager used to always say, “Be intentional.”  It took me a long time to comprehend exactly what he meant by this, but eventually I did and have come to appreciate the advice.  What he meant was to always make active, conscious decisions rather than just letting things happen.  It also means to verify things rather than assuming they are a certain way.  For example, if you don’t have enough time to do everything on your plate, think carefully about which items will not get done rather than just working on items in no particular order.  It should be your intent which specific items go undone.

This is a good principle to act by.  All too often people think about what they *are* doing but don’t consider what they *are not* doing.  It is just as important to be conscious about what you are not doing as it is to be aware of what you are.  If you don’t actively choose that which is not done, it is likely that the wrong things will drop off your plate.  It is easy to be busy working on something that is important to the detriment of something that is really important.  It is best to make all decisions, both positive and negative, conscious ones.  I’ll often ask my team when something goes undone whether that was intentional or not.  If there is only time to do 3 items and there are 4 that should be done, I’m fine with the 4th being dropped.  It is a poor manager who is upset when the impossible isn’t accomplished.  I do, however, hold my team accountable for that 4th item being something they intend to not get done rather than whatever just happened to be left at the end of the day.

I’ve seen this come up in testing features.  I recall a time when a report of mine was testing a feature with two aspects to it.  For good reasons he started working on the first part, a complex parser for device attributes.  Being complex, this took a long time to thoroughly test.  In fact, it was taking long enough that he was not going to be able to get to the second aspect of the feature at all.  I inquired whether this was really the right approach.  Was it better to test the parser thoroughly and the other part not at all, or to test the parser to some level, then test the other aspect, and finally return (in the future) to cover the less important parts of the parser?  Upon reflection he decided it was better to cover both to some extent than one fully and the other not at all.  The trouble here is that he wasn’t acting intentionally.  The test plan called for testing both aspects thoroughly.  The plan didn’t call for ignoring the second part.  It was only because of the unexpected difficulty of testing the parser that the second was going to be missed.  He needed to step back, re-evaluate, and decide intentionally rather than just letting events dictate what would be dropped.

This principle is also good to apply when dealing with other people.  Instead of just assuming that the other party will do the right thing, being intentional means specifically outlining expectations of them.  It is easy to think you’ve told someone what to do without them realizing that you did.  Being intentional means verifying that your assumptions were communicated and following up later.  It means being explicit when handing work to another person.  Make sure they understand that it is your expectation that they now have the action item before you clear it from your to-do list.

Wednesday, May 27, 2009

Five Books To Read If You Want My Job

This came out of a conversation I had today with a few other test leads.  The question was, “What are the top 5 books you should read if you want my job?”  My job in this case being that of a test development lead.  At Microsoft that means I lead a team (or teams) of people whose job it is to write software which automatically tests the product.

  • Behind Closed Doors by Johanna Rothman – One of the best books on practical management that I’ve run across.  1:1’s, managing by walking around, etc.
  • The Practice of Programming by Kernighan and Pike – Similar to Code Complete but a lot more succinct.  How to be a good developer.  Even if you don’t develop, you have to help your team do so.
  • Design Patterns by Gamma et al – Understand how to construct well factored software.
  • How to Break Software by James Whittaker – The best practical guide to software testing.  No egg headed notions here.  Only ideas that work.  I’ve heard that How We Test Software at Microsoft is a good alternative but I haven’t read it yet.
  • Smart and Gets Things Done by Joel Spolsky – How great developers think and how to recruit them.  Get and retain a great team.


This is not an exhaustive list.  There is a lot more to learn than what is represented in these books, but these will touch on the essentials.  If you have additional suggestions, please leave them in the comments.

Tuesday, May 12, 2009

Some Programming Languages to Consider Learning

Learning a new programming language can affect the way you think.  While most modern languages are Turing Complete and can theoretically all accomplish the same things, that’s not practically true.  Each language has its own strengths of expressiveness.  For instance, trying to write dynamically typed code in C++ is possible, but a pain in the neck.  You would have to implement your own type system to do so.  Each language makes certain things easy and other things hard.  Learning different languages then exposes you to different approaches.  Each approach provides a different way of thinking and a set of tools supporting that way of thinking.  What follows are some of the languages I’ve learned and what I think they provide.  This list is limited to languages I’ve studied in at least a little depth.  There are many more languages out there that may be useful.  If you have additional suggestions, please make them in the comments.

  • C – This is coding very close to the metal.  Learning it promotes an understanding of memory, pointers, etc.

  • Lisp/Scheme – Once you get past your parenthesis-phobia, it’s a very elegant language.  The big learnings here are treating code as data, metaprogramming (writing programs that themselves write programs), and recursion.  Lisp is also a dynamically-typed language.

  • Clojure – A variant of Lisp that runs on the JVM.  In addition to what you can learn from Lisp, it adds persistent data structures and transactional memory.  Persistent data structures are ones that *never* change.  Once you have a “pointer” to one, it will never change underneath you.  This makes parallel programming much simpler.  Clojure also is more of a functional language than Lisp/Scheme.  It is not purely functional, but allows for the functional style to be followed more easily.

  • Smalltalk – Much of modern programming originated in this language.  Modern GUIs are based on the Xerox PARC Smalltalk systems.  Object-oriented programming was first popularized in Smalltalk.  Extreme Programming, Agile, and design patterns all found their initial formulations in Smalltalk.  In addition to learning about OO programming, Smalltalk is great for understanding message passing.  This gives object-oriented code a different feel than the function call semantics of C++/Java/C#.  Smalltalk is also a dynamic language.

  • Perl – Perl was once known as the duct tape of the internet.  It ran everything.  It has since been surpassed (at least in blogosphere popularity) by other scripting languages like Ruby and Python.  The biggest thing to learn from Perl is regular expressions.  They are built into the core of the language.  Other languages support them but often as a library.  Even those that do support them in the syntax do not usually utilize them so pervasively.

  • C#/Java – These languages both solve the same problems in almost the same ways.  They are a great place to learn object-oriented programming.  It is built in from the ground up.  The OO style is one of function calls and strong interfaces (which distinguishes it from Smalltalk).  These languages also have the largest accompanying libraries.
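To make the contrast concrete, here is a small sketch in Python (one of the scripting languages mentioned above) showing first-class functions and dynamic typing, two of the things noted as painful to emulate in C++. The function names are purely illustrative:

```python
# First-class functions: behavior is passed around as a plain value.
def shout(s):
    return s.upper() + "!"

def whisper(s):
    return s.lower() + "..."

def render(message, style):
    # 'style' is just a function; no interface or class hierarchy needed.
    return style(message)

print(render("Hello", shout))    # HELLO!
print(render("Hello", whisper))  # hello...

# Dynamic typing: the same variable can hold values of different types.
x = 42
x = "now a string"
```

In 2009-era C++, the same flexibility would require function pointers or a hand-rolled interface hierarchy; here the behavior is simply another value you hand to a function.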

Thursday, April 30, 2009

Inbox Zero, Take Two

A year and a half ago I tried to get to “Inbox Zero” and failed.  This is the idea that you get your inbox down to zero mails every day.  I’m making another run at it and this time have been a little more successful.  I’m not perfect, but I haven’t fallen off the horse yet either.  Here’s what I have found to work.

  • Let all interesting mail fall directly into the inbox.  Don’t use separate folders for stuff from your boss or an alias/list that is important.
  • Move non-interesting mail into a separate folder by a rule.  I have rules to shunt off aliases I find merely interesting but not important into their own folders automatically.
  • Read or skim every mail that is in your inbox.  For each, make one of the following decisions:
    • Respond.  Read it and take the appropriate action.  If you can do this in a minute or two, just do it.
    • Delete it.  You have the information or it wasn’t interesting.  Either way, you don’t need to keep it around.
    • Archive it.  You may need to refer back to it later, but you don’t need to take any action on it.
    • Mark it for further reading.  It’s not critical to act on it, but too long to read now.  Put it in a folder to read later.
    • Mark it for further action.  It will take longer than you have to respond, but a response is necessary.  Put it in a folder for later response.
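The triage decisions above can be sketched as a single function. This is only an illustrative Python sketch; the field names and the two-minute threshold are my own assumptions, not part of any mail client:

```python
def triage(mail):
    """Return the disposition for one inbox mail, per the rules above."""
    if mail["needs_response"]:
        if mail["minutes_to_handle"] <= 2:
            return "respond now"            # quick enough: just do it
        return "move to Action Required"    # a longer response is needed
    if mail["worth_keeping"]:
        if mail["long_read"]:
            return "move to Read Later"     # keep, read when time allows
        return "move to Archive"            # reference material, no action
    return "delete"                         # information absorbed or uninteresting

# Example: a long newsletter worth keeping but requiring no response
mail = {"needs_response": False, "worth_keeping": True,
        "long_read": True, "minutes_to_handle": 0}
print(triage(mail))  # move to Read Later
```

Whatever the exact thresholds, the point is that every mail gets exactly one of the five dispositions; nothing lingers in the inbox undecided.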

Following these rules makes my inbox look something like this:

  • Inbox
    • Action Required
    • Archive
    • Read Later
  • Interests
    • Various subfolders for the non-critical aliases I am part of.

I also have a rule to move all mail sent to: or cc: me directly to my inbox.  This way mail intended for my eyes won’t get filtered into an “interests” folder.

I have found this system simple enough to keep up with.  It also means I no longer miss mails that were filtered into some folder I hadn’t gotten to yet.  I now see every interesting mail and am at least aware of it.  It also helps me keep track of the mails I really need to go back and respond to.  My old system was just to leave them unread, but this got unwieldy very quickly and I never made it back to most of them.

Monday, April 27, 2009

Don’t Worship at the Altar of Accuracy

Earlier today I found myself faced with a common management situation.  I had been sent an e-mail which showed that a piece of data we were using was inaccurate.  The specific issue was what percentage of a certain test run was automated.  We had said we were at 100% and it turned out there were a handful of tests being run on our behalf by someone else which were not automated.  My initial response was to investigate how big the non-automated block of tests was, why it wasn’t included, etc.  Then I stopped and thought about it.  Even if the number were as large as reported, it would be 10% of the total test suite.  That is almost certainly an exaggeration.  When we make the numbers more accurate, it probably slips to 1%.  Whether we are 90% automated, 99% automated, or 100% automated, does that change anything?  Is that number going to change what I ask of my team?  Probably not.  In all cases the items that are manual are intended to be that way.  I won’t stop running them or try to automate them.  All that I will gain by going through the process of making the number more accurate is a more accurate number.  Is there value in that?  I assert that the answer is no.  A number’s accuracy matters only to the extent that the difference will change behavior.  Within some range, different numbers won’t change behavior and so are not worth expending effort increasing the accuracy.

This isn’t to say numbers don’t matter at all.  They do—but only when decisions will be made based on them.  Effort is not free.  Spending energy refining a value that is accurate enough means not expending that same energy on something that might bring more value to the team.  It isn’t hard to find something which brings more value because the value an increasingly accurate number brings is zero.  This is especially important to note as a manager.  A manager typically does not spend a lot of effort making data accurate.  He or she merely asks others to do so.  In this way the costs are hidden and thus the tradeoff not as apparent.  Beware the cost of obtaining accuracy for its own sake.  I know it is in our DNA as engineers.  Suppress your inner urges and don’t worship at the altar.  Get to good enough and stop.

Wednesday, April 22, 2009

Simple Management Tip: Tracking 1:1 Conversations

Here’s a quick tip I’ve found very handy.  When doing 1:1’s with your team (you are doing these regularly, right?), take notes to keep track of the conversations from week to week.  I currently use a 5-tab notebook with one tab for each direct report.  Each person has their own section.  Each week when we meet, I take notes on the next page in their section.  This makes it really easy to refer back to last week’s notes and follow up on any ongoing issues.  Each week I circle the items I need to follow up on the following week.  This makes it trivial to pick them out.  Having one section per person means the previous week is only one page back.  I tried just keeping a continuous set of notes on everyone, but then finding the last time we talked could be difficult. 

Another advantage of having each person in their own section is it provides a space for next week’s agenda.  During the week as things come up, I jot them down on the next week’s page.  Then when it comes time for the 1:1, I already have a list of items to follow up on.  This also helps stop my subconscious mind from dwelling on these items (ala Getting Things Done) because I know they will be handled.

I have also seen OneNote used successfully for this purpose, but I prefer not to have a laptop between myself and the other person in our meetings.  It is a matter of taste.

Tuesday, April 21, 2009

Becoming a Manager: Learning to Rely on Data

Having been a manager* for a while now, I’ve learned more about what it means and what changes it requires in thinking.  This installment of the “Becoming a Manager” series covers the increasing reliance on abstract data that is required as you move up the ranks.  Everyone who is an IC knows that upper management demands lots of charts and data.  Sometimes this makes sense.  Other times we know it distorts the reality which is apparent on the ground.

When I was a lead I managed by walking around.  I didn’t pay much attention to statistics.  Instead, I would regularly touch base with each of my team members.  I would have 1:1s, regular scrum meetings, and hallway conversations.  This allowed me to have a strong sense of what was going on in my team.  This worked great with 6 reports.  When I became a manager I tried to continue this same method of staying on top of what my team was up to.  The problem was, however, that I now had 20 reports.  It’s not possible to touch base with all of them regularly enough.  It is also a lot harder to keep the issues at play in 20 peoples’ daily work in one’s head.  Finally, most of these 20 had leads who stand in the org chart between myself and them.  Trying to manage by walking around undercuts these leads because now their reports have 2 managers, not one.  This is confusing for all involved.

What about touching base with just my direct reports and getting a sense of the product from them?  This would seem to work, but is not terribly effective.  Each person reports differently and normalizing the information coming from each is difficult.  It becomes worse when people use the same words to mean different things.  When I try this technique, I almost always later find out that while I’m getting the same information from each, the results on the ground in each team differ greatly.  How then to get a normalized view of what is going on at the IC level in each team without going to each person to ask?

The answer lies in gathering data across the team.  I’ve come to rely on the dreaded charts for much of my knowledge.  I have learned to take advantage of tools to track how the team is progressing in getting its work items done, how many bugs are active, what our pass rate is, etc.  This allows me to get a view at a glance of how we are progressing across several key metrics.  As long as I am gathering the right data, I can have an accurate view of how the team is doing.  Based on this data, I can know which areas are doing well and which ones need more personal attention.

The key here is choosing the right metrics to measure.  The team will optimize for whatever it is I am measuring.  A wrong metric will distort behavior in undesirable ways.  I have found it is important to track only a few items.  A small number of items can be understood by all.  These do not describe everything about the team, but they can act as the proverbial canaries which point out trouble early.  I make sure everyone knows exactly what I am tracking.  Every one of my leads knows the queries I use to monitor things.  This allows them to point out flaws in my methodology and gives them transparency so there are no surprises.  It is also important to keep the metrics stable over time.  A different chart each week gives people whiplash.

One point about using data to monitor a team:  you need to stay flexible in the use of the data.  The data is a rough facsimile of a real thing that needs to be done.  The data itself is not the goal.  I have seen too many managers confuse the data with reality.  This causes them to push for clean metrics even when this causes undesirable distortions in behavior.  If the data is showing a different state than reality, fix the metric, not the behavior.  Align the data with the team’s actions, not the other way around.  It is important to note that data hides a lot of things.  Relying solely on data is a surefire way to fail.  I still get out and walk around to validate the conclusion I’m drawing from the charts.  The data merely helps me know which areas to spend my limited time digging into deeper.

It takes a while as a manager to become accustomed to viewing the world through the lens of spreadsheets and charts.  Just as it is difficult to learn to trust others, it is difficult to learn to trust data.  It is even more difficult to learn when not to trust the same data.


* Manager means having leads report to you.  A lead is someone who has only individual contributors reporting to them.

Wednesday, April 1, 2009

I'll be posting on Twitter occasionally

As I surf the net I often run across articles of interest.  If I feel they warrant a comment, I'll post them on this blog, but most don't rise to that level.  I've decided to try using twitter as an outlet for such items.  If you want to see the articles I think are most interesting, feel free to follow me on twitter:

Tuesday, March 31, 2009

Review: Peopleware

The book Peopleware by Tom DeMarco and Timothy Lister comes highly recommended by Joel Spolsky and Jeff Atwood over at the Stack Overflow Podcast.  It is probably most famous for its repudiation of the idea that cubicles make a better work environment for programmers than offices.  There is a lot more to this book than just an attack on cube farms, though.  The book dates from another era of the technology industry.  It was first written in 1987 with an update in 1999.  Most of the content ages very well, though.  It carries a lot of sage advice that managers today would be smart to read.  Alas, the book appears to be out of print at present.  Check your local library.  That’s where I got mine.

The book begins with a discussion of people and shipping quality software.  Among the insights are that you can’t squeeze more than a certain amount of work out of people.  If one demands a lot of overtime, people will slow down, do more of their own things on your time, and otherwise use up the time that was supposed to be gained.  The authors make the argument that quality, if allowed to be driven by the team, will be higher than if driven from above.  Peoples’ innate sense of quality is probably higher than the user demands.  Based on these statements, the authors refute the notion that the only way to get work done is to set tight deadlines.  The idea that work grows to fill an allocated space is—they say—false.  I’m not sure I fully agree.  Work does expand to fit the allocated space.  The solution is not, however, to set insane deadlines to squeeze out the work.  Instead, the solution is to set a rational deadline and keep track of progress via frequent checkpoints (ala scrum).

My favorite section is that one which talks about people.  The authors assert that great people are born, not grown.  That’s not quite true.  They are born with innate talents and then grown to greatness.  The key point though is that those born without the right abilities will never be great.  You can’t teach everyone to be a great programmer.  Sorry.  Because of this, and the difficulty in getting rid of someone once hired, it is important to set a high hiring bar.  It is better to hire and then retain the right people than to hire the average person and try to grow them into above average performers.  It’s also important to retain stars.  Invest in them. 

Beyond individuals, teams are important.  The authors spend some time talking about what makes great teams work.  Unfortunately, they don’t give a formula for creating one.  No one seems to know how to do this.  Maybe some day we’ll figure it out, but for now the consensus seems to be that they just happen.  Managers may not be able to create a well-jelled team, but they can certainly prevent one from happening.  The authors call this “teamicide” and give several examples of behavior that causes it:

  • Defensive management – Managers must trust their teams.  Attempts to succeed despite their failure only poisons the environment.
  • Bureaucracy – Paperwork and policies that are arbitrary and disrupt the work flow.  If management is more interested in paperwork than results, the team notices.
  • Physical separation – People interact better when they sit near each other.
  • Fragmentation of time – Give people only one top priority at a time. 
  • Quality-reduced Product – Management cannot demand a shoddy product or the team will stop performing.
  • Phony deadlines – Deadlines should be real (and realistic).  Fake ones to force out more work just cause people to check out.
  • Clique control – Let people group up.  It’s called a team.

There’s a lot more in this book.  If you can find a copy, get it and read it.  There’s a lot here for every technology manager.

Wednesday, March 25, 2009

Spotting the "Uncoachables"

Interesting article from Harvard talking about how to spot people who can't be coached.  The author gives 4 symptoms to look for, but they basically boil down to one.  Does the person want to change?  If someone isn't interested in changing because they don't sense a problem, are burned out, or think everyone else is to blame, no amount of coaching will get them on the right track.  The author advocates walking away from these people.  That may be an easy option for a consultant, but it usually isn't for a manager.  Sometimes you can fire the person or encourage them to move on, but this isn't always possible.  Sometimes it isn't even desirable.  The uncoachable person may have high value in another aspect of their personality.  What then?

The only viable approach when someone doesn't perceive a reason to change is to modify their attitude.  You have a reason you want them to change.  Get them to recognize it.  Note, however, that their worldview is not necessarily yours.  What motivates them may not be what motivates you.  If someone is overly sure of themselves, telling them their actions offend other people probably won't help.  They won't care.  However, telling them their actions put in jeopardy a promotion or their project or their ability to make an impact may.  Determine what their motivating factors are and couch your discussions in terms of those values.  Then, once you have convinced them there is a reason to change, work on the change.

Tuesday, March 24, 2009

Review: The Effective Executive

I read The Effective Executive by Peter Drucker because it was highly recommended on the Manager Tools podcast.  Despite what its name may imply, it isn’t written only for company executives.  Instead, Drucker defines an executive as anyone with decision making ability.  This certainly includes all managers within a modern technology company and most of the frontline staff as well.  Drucker outlines 4 major areas of concentration for becoming more effective.

The first is your time.  Here the advice boils down to measuring where you spend it.  Time is the one thing everyone has in the same quantity and you can’t get any more of it.  If you want to make effective use of your time, know where you spend it.

Once you know where you spend your time, how do you decide where to apply it?  The next piece of advice involves making a contribution.  Determine where you can most make a unique contribution to the organization and spend your time there.  Ask yourself, “What can I contribute?”  For the rest, try to delegate to others.  Set the bar high and determine what active contribution the position should be making.

Next up is building on your strengths.  This is very similar to Now, Discover Your Strengths.  Drucker advocates hiring and rewarding people for their strengths, not their weaknesses.  I think he dismisses weaknesses a bit too cavalierly.  A significant weakness can overwhelm someone’s strengths.  It can make others view them negatively which can create a negative feedback loop.  However, his advice to focus hiring on strengths instead of a lack of weakness is on point.  People will accomplish a lot more in their area of strength than in a place where they are merely not weak.

Finally, Drucker talks about making effective decisions.  Toward this end he recommends concentrating on only one thing.  Have one focused initiative at a time.  Clearly define what the “boundary conditions” are.  By this he means understanding the specifications the decision must satisfy.  Build action into the decision.  A decision without action has no impact.  Measure the effectiveness of the decision.  This ensures not only that the decision was right, but that it stays right.  He also dedicates a whole chapter to making decisions not between right and wrong but between two courses of action neither of which is clearly right or wrong.  His advice here is essentially, argue both sides.  Don’t make the mistake of jumping on an early decision.  Instead, thoroughly vet each of the alternatives.

Overall I found this a good book.  Perhaps not as good as the hype, but useful.  I found myself doubting the reviews during the first part of the book.  The advice seemed solid, but obvious.  The second part, which discusses decision making, was much more useful.  I truly enjoyed the last three chapters.

Tuesday, March 17, 2009

E-mail Is Not A Good Motivator

Another conversation I find myself having over and over is telling people that e-mail isn’t a sufficient mechanism for communication.  I already discussed how e-mail isn’t a good medium for handling disputes.  It is not a great motivator either.  In today’s world, where people get hundreds of messages a day, it is too easy to ignore.  Receiving an e-mail saying “Please get this done” sometimes doesn’t work.  This is especially true if the sender has no inherent power.  A manager’s mails are less likely to be ignored, but those from a peer often are.  People are busy.  It’s going to take more than just 1/100th of their inbox (much less in some cases) to prompt action.

Too many times I’ve experienced a conversation that goes something like this:

Manager:  “Why weren’t the widgets waxed by 5:00 for the presentation?”

Report:  “I asked <other person> to do it.  I sent mail several times.”

Report seems convinced that they are absolved of responsibility because they asked.  In e-mail.  More than once.  Isn’t that enough?  What more should Manager expect?

If it is truly important that something gets done by <other person>, mail just doesn’t cut it.  As I said, it is too easy to ignore.  A different tactic is necessary.  One that expresses the importance by the level to which Report is willing to go to get it accomplished.  “Escalation?” thinks Report.  Maybe telling <other person>’s boss about it?  No.  Not yet at least.  Escalation ruins relationships and should be used only as a last resort.

The solution is as simple as it is old.  In today’s world, it is also rarer than it should be.  Try an analog approach.  Pick up the phone and call.  Walk down the hall and stop by <other person>’s office.  It takes some effort, but it will likely garner the hoped-for results.  Amazingly enough, most people react differently to human contact than they do to an impersonal e-mail.  Personal contact creates some level of relationship.  It tells the person you care enough to expend the energy.  This might communicate that you care about them as a person or it might merely tell them that you care about the work.  Either way, they are more motivated to get the job done.  Two more benefits are that a personal visit is a lot harder to ignore than an e-mail and you know the message was received.

This approach works as well with reports as it does with peers.  If something is truly important, say it in person as well as in e-mail.

Saturday, February 28, 2009

10 Papers Every Programmer Should Read

I’m always on the lookout for good reading material.  Michael Feathers over at ObjectMentor has served up a great post entitled 10 Papers Every Programmer Should Read.  I intend to read them all.

Monday, February 23, 2009

Now, Discover Your Strengths

This is the title of the follow-up to First, Break All the Rules by Marcus Buckingham.  The first book was brilliant and really challenged the way we think about what makes someone successful at their job.  Now, Discover Your Strengths attempts to follow up on that with an in-depth discussion of “strengths.”  Strengths are a combination of knowledge, skills, and what the authors call talents.  A talent is “any recurring pattern of thought, feeling, or behavior that can be productively applied.”  Basically, it is your innate ability to do something.  If you aren’t born with a talent for, say, public speaking, no amount of training will make you Steve Jobs.  If you don’t have a talent for abstract thinking, you’ll never make a great programmer.  Sure, you can become competent at either, but you’ll never make it into the elite of your career discipline.

This sounds about right, but the authors don’t do a lot in this book to justify the position.  There is some talk about the way our brains develop neural pathways.  This may be the reason but the evidence in the book is not sufficient to really make the case.

The core of the book revolves around the premise that you will improve much more by focusing on the areas where you have talents (these are your strengths) than by spending a lot of energy trying to remove your weaknesses.  There is a lot of good anecdotal evidence for this in the stories of Tiger Woods, Cole Porter, and others.

Unfortunately, the book spends a lot of time on StrengthsFinder, a questionnaire consisting of 180 questions, the answers to which will reveal which of the 34 identified strengths you possess.  The questions felt a lot like those you would find on a Myers-Briggs test.  With the purchase of the book you can take the quiz once.  It will then tell you what your top 5 strengths are.  I was unimpressed with the test.  While Myers-Briggs usually aligns well with how I view myself, this one didn’t.  It listed elements I think are pretty far from my strengths and omitted things I think are.  Either I have a very wrong view of myself or the test is flawed, at least in my case.  I suspect the latter.  Maybe I wasn’t able to understand what the questions were asking well enough.  There were several that could be interpreted in very different ways.  Whether or not the test is accurate, the information about the strengths themselves is paltry.  Each gets about a paragraph describing it and a page telling managers how to deal with someone who has it.  I’d like to see a lot more discussion of what the individual should do with their strengths.  This was almost wholly lacking.

The end of the book makes the case that organizations should focus on strengths instead of skills.  An example of this is hiring for strengths and not specific knowledge or skills.  This may be a good idea, but I don’t feel the case was made strongly enough.  It was more assumed to be true than truly justified.  Even if true, it would be very hard to implement.  Should an organization make each interviewee take a test before being hired?  I’m sure the owners of the test would love that, but it sounds impractical.  It also ignores the ramp-up time someone with only strengths and no present skills needs to become productive.

The overall theme of the book—to pay attention to strengths and not weaknesses—seems right.  I’m persuaded that this is true, but more because of preconceived notions than because of the book.  The follow-through seemed weak.  This is disappointing because the first book in the series was truly eye-opening and much better justified.  Overall, I can’t recommend this book.  Borrow it from the library and read the relevant portions in a day or two. 

Monday, February 16, 2009

Check Out Stack Overflow

I’ve recently become quite addicted to the website Stack Overflow.  It is a joint venture between Jeff Atwood and Joel Spolsky.  There is an accompanying podcast if you want to hear about the creation process.  The site itself is a question and answer site for programming questions.  Want to know how to do a simple Perl hash equivalency comparison?  Ask.  Want to find the best book on C# for experienced programmers?  Ask.  There is quite an active community and most questions are answered in short order.  You don’t even have to sign up to ask your first question.

If you want to stick around longer than one question, you can answer questions and earn reputation for doing so.  Greater reputation means more abilities on the site.  At one level you can change the tags on questions.  At another level you can vote to close questions.  Still more reputation and you can actually edit the text of questions.  Reputation is granted by users voting for the best answers and questions.  It’s amazing how addicting it can be to try to raise an arbitrary score.

The site has only been open for a few months and already it is a treasure trove of knowledge.  Joel recently stated that the site gets something like 2 million unique visitors a month.  As I write this, approximately 90,000 questions have been asked.  Almost all have answers.  This is crowdsourcing at its best.  Once people start linking to it in large numbers, expect to see it shoot up the rankings of programming-related searches.

There are downsides to Stack Overflow’s popularity.  Questions don’t stay on the front page for long.  I suspect they will have to create sub-pages for different topic areas the way Reddit did with its subreddits.

Friday, February 13, 2009

Why We Conduct Bug Bashes

My team recently finished what we call a “bug bash.”  That is, a period of time where we tell all of the test developers to put down their compilers and simply play with the product.  Usually a bug bash lasts a few days.  This particular one was 2 days long.  We often make a competition out of it and track bug opened numbers across the team with bragging rights or even prizes for those who come out on the top of the list.

Bug bashes are a time when everyone on the team is asked to spend all of their time conducting exploratory testing.  Sometimes managers will influence the direction by assigning people end-user scenarios or features to look at.  Other times the team is just let go and told to explore wherever they desire.  Experience has shown me that some direction can be good.  Assigning people to explore an area they don’t usually work on gets new eyes on the product and with new eyes come new use patterns and new bugs.  Recently I’ve also discovered that it can be helpful to track where people have spent their time.  During our last bug bash we created a list of areas that should be explored and had people sign off when they had investigated them.  This gives us a much better sense of just what the coverage looked like and allows us to ensure all areas received attention.

Conducting a bug bash can be expensive.  There is a lot of work to get done and putting everything else aside for 2 days adds up to a lot of other work getting pushed off.  Why do we do this?  What is the return on the investment?  There are three primary reasons that come to mind:

We have found empirically that a bug bash flushes out a lot of bugs in a short period of time.  Our most recent bug bash saw the number of bugs opened jump to 400% of the daily average.  This is important because it frontloads the finding of bugs.  The earlier we know about bugs, the more likely we are to be able to fix them.  Knowing about more bugs also helps us make more informed triage decisions.

The second reason we conduct bug bashes is because they are likely to find bugs on the seams.  Test automation can only find certain kinds of bugs.  Exploratory testing is a much better way to find issues on the seams—where functional units join up.  Sometimes these bugs are the most critical.  Imagine if we could have found the Win7 MP3 bug or the interaction between playing audio and network throughput before shipping the respective products.  These are the sort of issues highly unlikely to be found in test automation but which can be found through exploratory testing.  We obviously don’t find all such issues through bug bashes, but we do find a lot.

The final reason we run bug bashes is to get a sense of the product.  Most of the time we spend our days focused on one small part of the operating system or another.  It’s hard to get a sense for the state of the forest while staring at individual trees.  After spending several days conducting exploratory tests on the product, we can get a much better sense of whether the overall product is doing well or if there are serious issues.

Thursday, February 12, 2009

Managing Humans

I just finished reading Managing Humans by Michael Lopp, aka Rands in Repose.  Michael is a 15-year veteran manager from Silicon Valley.  He’s worked for such notable companies as Netscape and Borland.  He has a lot of good advice based on this experience.  The book is a compilation of blog posts so you can probably get by without buying it if you really want to.  However, there is a lot of good stuff in here and having a copy to mark up is handy.

If I have something negative to say about the book, it is that it doesn’t have a well-defined audience.  Despite the title, the book isn’t aimed entirely at managers.  Some of it tells managers how to think.  Some tells employees what their managers are thinking.  Still other essays are aimed at those starting a company.  A fourth audience is those trying to land a job and going through the interview process.  Each of these audiences has good essays dedicated to it, but the mix also ensures there will be essays that are useless to any given reader.

The best essays cover the subjects of meetings and employees. 

On meetings, Michael dedicates several essays to the sorts of people you’ll find in meetings and how to help those meetings succeed.  He also gives advice about how to tell when a meeting is doomed so you can leave.

On employees, he explains the concept of the “free electron” which is that rare super programmer and how to keep them happy.  He talks about analyzing poor performance and then acting on it.  One great tip is determining what the real problem is by creating a 2x2 grid comparing motivation and skill.  In a point that hit home with me, Michael pointed out that sometimes what appears to be a motivation issue can really be a skills issue.  Someone who once had great skills but who has been coasting on past accomplishments may have low motivation because their skills have atrophied.  In this case, skills training may be the solution to the motivation issues.

If you are managing a team, read this book.  If you want to understand what your manager is likely thinking, read this book.

One more note, the writing style is very irreverent.  If you are easily offended, you may find yourself wincing at times.

Monday, January 26, 2009

Alan Kay on User Interface Design

As part of the Berkeley Webcast project, a pair of presentations by Alan Kay (of Smalltalk fame) is available.  The presentations are from the early 1980s and discuss the development of user interface design from the 1960s onward.  If you are into computer history at all, these are very interesting.

Part 1

Part 2

The entirety of CS61A is available in podcast format if that is easier to access.


Friday, January 9, 2009

James Whittaker Netcast

James Whittaker is the author of books like How To Break Software.  He ran one of the few university-level testing programs at Florida Tech.  He's now at Microsoft, helping Visual Studio become better at testing.  The guys at .NET Rocks caught up with him for an interview.  James explains what he thinks the future of testing is and what's right and wrong with testing at Microsoft.  Put this on your Zune/iPod.  It's worth the hour.