Thursday, September 27, 2007

Digital Audio Primer

Ars Technica has a new primer up describing digital audio.  A good read if you want to introduce yourself to the concepts.

Wednesday, September 26, 2007

Saying Goodbye to Illinois

I'm about to head home from my trip to the University of Illinois, Urbana-Champaign (UIUC).  I've enjoyed the visit.  I was able to interview 23 bright students, and the quality was very high.  The school should be proud.  I also had a chance to see the places and some of the people I'd heretofore only seen on the grainy end of a web broadcast.  It was fun to see the rooms where my classes are held and to meet some of the professors I've had the opportunity to take classes from.  During my stay I tried some of the local cuisine.  Timpone's is a fancy Italian restaurant with a great atmosphere and good food.  I also tried Papa Del's pizza.  The deep dish is amazing, but bring a big stomach.  Finally, I grabbed lunch one day at Zorba's.  They specialize in gyros and have a fun local atmosphere.  Lots of college students seem to eat there, and the walls are covered with newspaper clippings of UIUC sports teams.

Tuesday, September 25, 2007

Enjoying Pandora

I've really been enjoying listening to Pandora lately.  It is a net radio service that builds a "station" for you based on your tastes.  You begin by entering a song or an artist you like.  It then plays music it thinks is similar.  You can give each selection a thumbs up or thumbs down, and based on your input, it refines the programming.  This is a great way to discover new music.  Today I started with Evanescence and discovered Leaves' Eyes and Nightwish.  Now if someone would just make a full-featured Pandora module for Media Center...

Monday, September 24, 2007

What Is A Microphone Array?

One of our program managers, Richard Fricks, just had a piece posted on the Windows Vista blog talking about microphone arrays.  He describes what microphone arrays are, what they are good for, and how Windows Vista enables support for them.  If you use the built-in microphone, you should understand this feature.  It has the potential to provide a dramatic boost in the sound quality.  Expect to see more and more mic arrays appearing, especially in laptops.

A Little Design Advice

A recent article on InfoWorld lays out "The eight secrets that make Apple No. 1."  There are many things in the article that I disagree with but there are two that stick out as good advice for software design.

The first "secret" is that engineering supports design and not the other way around.  Traditionally the software industry has done engineering first and design second.  This might be because CS schools emphasize code and don't give you a better grade for prettier UI.  It could be because programming originated in a command-line environment where a friendly UI wasn't really possible.  It could just be because the typical programmer has a bad sense of style.  Whatever the reason, it's true that more software is written engineering first than design first.  Try using software written 10+ years ago and you'll see an extreme example of why it shouldn't be.  In today's world where things like WPF and AJAX give us so much flexibility in UI, there is no excuse to constrain the user experience based on what is easy to engineer.  Part of the reason the original Mac turned out as well as it did was because Steve Jobs was such a fanatical advocate for it to look good.  He was ahead of his time. 

This can be taken too far.  The Macintoshes of the mid to late 1990s looked nice, but the engineering beneath them was failing.  They were slow and flaky.  They didn't even have pre-emptive multitasking yet; the Amiga had it in 1985.  A solid engineering base is definitely required, but the part that interacts with the user needs to be designed from the user backward.

The second "secret" I want to mention is what the author calls, "You can't please everyone, so please people with good taste."  I disagree with the recommendation to just target the high end.  Walmart made a lot of money following the opposite approach.  The key takeaway here though is to target a customer segment.  Don't target the whole market.  Trying to satisfy everyone is a great way to satisfy no one.  You'll get higher customer satisfaction if you solve a few people's needs 100% than if you solve everyone's needs 80%.  The siren song of targeting everyone is always high.  There's more money in the whole market than in a niche.  However, trying to get that money is easier if you target a niche, satisfy it, then move on to the next niche.  It also means that your customers will be more sticky.  If your product is an 80% solution, that means there is 20% that people don't like.  That's 20% that if someone solved, customers would just to the competing product.  If a customer has a 100% solution, she will be very loyal and unlikely to jump to a different supplier.

Friday, September 21, 2007

Metrics of Software Quality

This post over on TestingReflections brings up an interesting point.  Michael answers the question, "What are the useful metrics for software quality?" with another question.  He asks, in a roundabout fashion, what it is that we value about the software.  He rightly points out that some of the things we normally associate with software quality may not be what we're looking for in some circumstances.  He suggests these metrics as possibilities:

  • Is it testable?
  • Is it supportable?
  • Is it maintainable?
  • Is it portable?
  • Is it localizable?

These are all great questions that will affect many software projects.  A few more that come to mind and often matter are:

  • Does it meet the specification?
  • Is it extensible?
  • Is it documented (both the code and for the customer)?
  • Is it deterministic (does it behave the same each time)?

Michael points out that none of his metrics--and really none of mine--are quantifiable.  They are qualitative metrics.  The bind testers often find themselves in is that they are asked by management to produce numbers (pass rates, code coverage, complexity values, bug trends, etc.) to describe quality.  While those are useful pieces of information, they do not accurately describe the state of the program or of the code.

Thursday, September 20, 2007

Visiting UIUC

If you follow this blog, you'll know that I'm currently working on my Master's in Computer Science through the University of Illinois, Urbana-Champaign.  I really like the program I'm in.  Most classes are real: there are real people meeting in a classroom on campus three times a week, and those of us online get to join in via a camera.  It's strange, though, taking classes at a school you've never been to.  You see the whiteboards in plenty of rooms you've never set foot in.  Next week I'll have the opportunity to change that.  I'm going there as part of a recruiting trip to UIUC and will get to spend a few days on campus.  I look forward to getting a better feel for where things are located and just experiencing the campus and surrounding area.  I might also try to look up a professor or two that I've taken classes from.  Should be fun.

Wednesday, September 19, 2007

Do We Still Need Test Developers?

In my post, Test Developers Shouldn't Execute Tests, Antony Marcano asked if we actually need test developers or if developers would do.  If the more traditional testing tasks are being done by one group and the automation by another, does it even make sense to have a test development role any more?  The answer to this question is interesting enough that it warrants its own post.  The answer I've come to is yes.


The skills employed by test developers are very similar to those utilized by developers.  In fact, I've often argued that test developers are merely developers in the test organization.  I stand by that description.  There are some differences that we'll see below, but it is largely true.  The vast majority of the time a test developer is working his craft, he'll be writing code that looks very similar to the code written by developers.  If the product is a framework or platform, the code a test developer writes is the same code written by any developer using the framework.  The constraints on the code are different.  The level of polish is usually not as high: the execution path for test code is very constrained, so the code need not be as robust outside that path.  High quality is, however, still very important.  Tests cannot be flaky, and they often have long lifespans, so correct behavior and maintainability matter.  These are the same sorts of things developers are required to produce in their code.


So why do we need this class of people?  The skill set, while similar, is not identical.  It is also important to have a role that is set aside just for this task.


Test developers must think differently than developers.  Whereas a developer must worry about the positive cases, a test developer focuses her efforts on the negative cases.  She is more interested in the corner cases and "unexpected" input than in the primary method of operation.  This is even more true today, as unit tests become more widespread.  Test developers also must focus on end-to-end and integration testing.  This is often the hardest to automate and requires a different approach than typical unit tests take.  Developers could write this code, but they are not accustomed to doing so.  Having a person who does this all the time will produce more refined solutions.  In short, test developers will have a greater sense of where to look for bugs than someone who does not spend most of their day trying to break things.
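
To make the contrast concrete, here is a minimal Python sketch (the parse_age function and its rules are hypothetical, invented for this example).  The first assertion is the kind of positive case a developer naturally covers first; the rest are the boundaries and bad inputs a test developer goes looking for.

    def parse_age(text):
        """Hypothetical function under test: parse an age from user input."""
        value = int(text)            # raises ValueError on non-numeric input
        if not 0 <= value <= 150:    # reject implausible ages
            raise ValueError("age out of range")
        return value

    # The positive case a developer covers first.
    assert parse_age("42") == 42

    # The cases a test developer gravitates toward: boundaries and bad input.
    assert parse_age("0") == 0 and parse_age("150") == 150
    for bad in ["", "abc", "-1", "151", "0x2A"]:
        try:
            parse_age(bad)
        except ValueError:
            pass
        else:
            raise AssertionError("expected ValueError for %r" % bad)

Nothing here is hard to write; the difference is in which cases you think to write at all.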


There are particular methodologies specific to test development, and developers will be unfamiliar with them.  I'm thinking of model-based testing, data-driven testing, equivalence classes, pairwise testing, etc.  Being adept at each of these is a skill that requires development.  A person dedicated to test development will learn them.  Someone recruited for only a short time to write some tests will not.
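
As one illustration of the data-driven style built on equivalence classes, here is a rough Python sketch.  The shipping_cost function and its price tiers are invented for the example; the point is the shape of the test, not the product.

    import unittest

    def shipping_cost(weight_kg):
        """Hypothetical function under test: flat shipping tiers by weight."""
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        if weight_kg <= 1:
            return 5.00
        if weight_kg <= 10:
            return 12.50
        return 40.00

    # The data table drives the tests.  Each row is a representative value drawn
    # from an equivalence class (or sits on a boundary between classes).
    VALID_CASES = [
        ("light_lower_bound",  0.01,  5.00),
        ("light_upper_bound",  1.0,   5.00),
        ("medium_typical",     5.0,  12.50),
        ("medium_upper_bound", 10.0, 12.50),
        ("heavy_typical",      25.0, 40.00),
    ]
    INVALID_CASES = [("zero_weight", 0.0), ("negative_weight", -3.0)]

    class ShippingCostTests(unittest.TestCase):
        def test_valid_equivalence_classes(self):
            for name, weight, expected in VALID_CASES:
                with self.subTest(name=name):
                    self.assertAlmostEqual(shipping_cost(weight), expected)

        def test_invalid_equivalence_classes(self):
            for name, weight in INVALID_CASES:
                with self.subTest(name=name):
                    self.assertRaises(ValueError, shipping_cost, weight)

    if __name__ == "__main__":
        unittest.main()

Keeping the cases in data rather than in code is the point: a tester can add a new class or boundary by adding a row without touching the test logic.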


Additionally, there is a benefit to having a dedicated test development corps beyond just the specific skills.  If everyone is thrown into a common bucket, testing tends to get underfunded.  The money is not made directly by testing.  In fact, you can usually skimp on testing for a while and get away with it.  Over time, though, a product will quickly degrade if it is not actively tested.  Having a sandboxed team responsible for testing ensures that proper time is spent testing.  No one can steal cycles from the test development team to implement one more feature.


It's also important to have a group that is not invested in the product the same way that dev is.  A good tester will be proud to hold up the product if it deserves it.  Developers will, subconsciously, be rooting for the product to succeed and may overlook things that are wrong.  They'll also approach the product with assumptions that someone uninvolved in its creation won't share.  What is obvious to everyone who wrote the product may not be obvious to anyone else.

Monday, September 17, 2007

Apportion Blame Correctly

This is a follow-on to my post about Managing Mistakes.  There is a particular kind of blame I've seen several times and which I feel is especially unwise.  The situation is this: a person, we'll call him Fred, is working with another team to get them to do something.  He needs to get a feature into their product.  This feature is important to his team.  Fred does all the right things, but the other team drops the ball.  They don't respond.  They slip their deadlines.  They don't include the feature.  The trouble is, while Fred did all the right things, he could have done even more.  He could have called one more time.  He could have camped out in their offices.  He could have paid closer attention to the other team's schedule.  In short, he could have made up for their failure through extra work on his part.  Usually, however, these extra steps are only obvious in hindsight, which is, after all, 20/20.

The all-too-typical response of management is to blame Fred for the failure.  After all, he could have avoided the problem if he had just done a little more work.  Management is frustrated and embarrassed that the feature didn't get completed as planned.  Fred is on their team.  Fred is thus the easy target.  Fred does deserve some of the blame here.  It's true that he could have done more.  But Fred wasn't delinquent in his actions.  He followed the usual steps.  The other team had agreed to do the work.  If they had done their part, everything would have worked out.  Unless Fred's manager is better than average, he'll be receiving all of the punishment even though he's only partly at fault.

There is a term in the legal field for this sort of blame.  Joint and several liability is when two parties sharing disproportionate amounts of the blame can each be held liable for the full cost.  Wikipedia gives this example: if Ann is struck by a car driven by Bob, who was served in Charlotte's bar, then both Bob and Charlotte may be held jointly liable for Ann's injuries.  The jury determines Ann should be awarded $10 million and that Bob was 90% at fault and Charlotte 10% at fault.  Under joint and several liability, Ann may recover the full damages from either of the defendants.  If Ann sued Charlotte alone, Charlotte would have to pay the full $10 million despite being at fault for only $1 million.

This strikes most people as unfair.  Charlotte is only responsible for 10% of the damage and should thus pay only 10% of the penalty.  Applying an analogous standard in the workplace is equally unfair.  If Fred is only 10% responsible for the failure, he shouldn't receive all of the penalty associated with it.  He shouldn't be reprimanded as if it is solely his fault.  Unfortunately, Fred is the most proximate target and thus often receives all of the blame.

Instead, this is a good time to apply the Tom Watson Sr. school of management.  Don't blame Fred for the whole problem.  Instead, use it as a learning experience to teach him what he could do better next time.  Doing so creates a grateful employee instead of a discouraged one.  Fred is more likely to stick his neck out again if it isn't cut off the first time.  He's also smart.  He won't let the same mistake happen a second time.

Friday, September 14, 2007

Test Developers Shouldn't Execute Tests

This view puts me outside the mainstream of the testing community, but I feel strongly that test teams need to be divided into those who write the tests (test developers) and those who execute them (testers).  Let me be clear up front: I don't mean that test developers should *never* execute their tests.  Rather, I mean that they should not be responsible for executing them on a regular basis.  It is more efficient for a test team to create a wall between these two responsibilities than to try to merge them.  There are several reasons why creating two roles is better than one.

If you require that test developers spend more than, say, 15% of their time installing machines, executing tests, etc., you'll find a whole class of people who don't want to be test developers.  Right or wrong, there is a perception that development is "cooler" than test development.  Too much non-programming work feeds this perception and increases the likelihood that people will run off to development.  This becomes a problem because, in many realms, we've already solved the easy test development problems.  If your developers are writing unit tests, even more of the easy work is already done.  What remains is automating the hard things.  How do you ensure that the right audio was played?  How do you know the right alpha blending took place?  How do you simulate the interaction with hardware or networked devices?  Are there better ways to measure performance and pinpoint what is causing the issues?  These and many more are the problems facing test development today.  It takes a lot of intellectual horsepower and training to solve them.  If you can't hire and retain the best talent, you'll be unable to accomplish your goals.

Even if you could get the right people on the team and keep them happy, it would be a bad idea to conflate the two roles.  Writing code requires long periods of uninterrupted concentration.  It takes a long while to load everything into one's head, build a mental map of the problem space, and envision a solution.  Once all of that is done, it takes time to get it all into the compiler.  If there are interruptions in the middle to reproduce a bug, run a smoke test, or execute tests, the process is reset.  The mind has to page out the mental map.  When a programmer is interrupted for 10 minutes, he doesn't lose 10 minutes of work.  He loses an hour of productivity (numbers may vary, but you get the picture).  The more uninterrupted time a programmer has, the more he will be able to accomplish.  Executing tests is usually an interrupt-driven activity and so is intrinsically at odds with the requirements of getting programming done.

Next is the problem of time.  Most items can be tested manually in much less time than they take to test automatically.  If someone is not sandboxed to write automation, there is pressure to get the testing done quickly and thus run the tests by hand.  The more time spent executing tests, the less time spent writing automation which leads to the requirement that more time is spent running manual tests which leads to less time...  It's a tailspin which can be hard to get out of.

Finally, we can talk about specialization.  The more specialized people become, the more efficient they are at their tasks.  Asking someone to do many disparate things inevitably means that they are less efficient at any one of them.  Jack of all trades.  Master of none.  This is the world we create when we ask the same individuals to both write tests and execute them repeatedly.  They are not granted the time to become proficient programmers nor do they spend the time to become efficient testers.

The objections to this system are usually two-fold.  First, that it creates ivory towers for test developers.  No.  It creates a different job description.  Test developers are not "too good" to run tests; it is just more efficient for the organization if they do not.  When circumstances demand it, they should still be called upon for short periods to stop coding and get their hands dirty.  Second, that it allows test developers to become disconnected from the state of the product.  That is a valid concern.  The mitigation is to make sure that they still have a stake in what is going on.  Have them triage the automated test failures.  Ensure that there are regular meetings and communications between the test developers and the testers.  Encourage the testers to drive test development and not the other way around.

I know there are a lot of test managers in the world who disagree with what I'm advocating here.  I'd love to hear what you have to say.

Thursday, September 13, 2007

Managing Mistakes

A promising young executive at IBM was involved in a risky venture that lost $10 million for the company.  When Tom Watson Sr., the founder and CEO of IBM, called the executive to his office, the executive tendered his resignation.  Watson is reported to have said, "You can't be serious. We've just spent $10 million educating you!"

If you've ever watched The Apprentice starring Donald Trump, you will have seen a different approach to handling failure.  Every season goes something like this:  The best people step up to lead.  They do well for a while but eventually make a mistake.  Trump berates them for that mistake and they are fired.  By the end, only the weakest players are left because they stayed in the background and didn't make themselves a target.

Mistakes inevitably happen.  As a manager, when they do, you must choose how to react.  You can choose to act like Tom Watson or like Donald Trump.  In my experience, I have seen managers make both choices.  I have seen both sorts of managers be successful.  However, those who emulate Trump do not usually have happy organizations.  Watson's response engenders love; Trump's, fear.  Both are good motivators, but ruling by fear has serious consequences for the organization.  If you want the most from your team, you want them to be motivated by love.  People will simply go further for love than they will for fear.  They'll also stick with you longer.  If you rule by fear, people will only follow as long as they think you can offer them something.

Here is a real-world example.  As a test manager, I've seen times when we've missed things.  Late in the product cycle, or worse, after we've shipped, a significant bug is found.  The scrutiny then begins.  My manager will usually start asking pointed questions.  At this point there are two ways to react.  The first is to get upset.  "How could we have been so stupid that we missed this thing?"  "Only incompetent people wouldn't have done the obvious things that would have led to this being found."  The pressure to take this approach is high.  There is emotion involved.  Something went wrong, and our most base instincts say that we must demand payment.  There is also the desire to deflect the blame.  "It's not my fault.  Sam screwed up."  Jumping on Sam about this will scare him into thinking twice before making a mistake again.  There are problems, though.  The fear of being blamed for failure will distort team behavior.  People will be more careful, but they'll also be less honest.  They'll deflect the blame.  They'll hide mistakes.  It's possible that the real cause will remain hidden.  If it does, the same mistake will repeat.

The second reaction is the one I strive to take.  My rule of thumb is that if something goes wrong, I don't ask who messed up.  I ask my team to tell me not what went wrong, but rather how we'll avoid this mistake next time.  I don't get mad unless they make the same mistake twice.  Then they aren't learning.  As long as they are learning from their mistakes, I am not upset.  This causes the team to react much differently when things go wrong.  They will be supportive of the efforts to find out what happened.  They will be more open about what led up to the mistake.  They'll also work hard next time not to make mistakes.  Not because they fear being chewed out, but because I've been supportive.  They'll want to do well because they respect me and doing well is what I expect of them.

Wednesday, September 12, 2007

Practice, Practice, Practice Makes Perfect

I was sent a link to this article as a follow-up to my post about learning to program over a long period of time.  The article isn't about programming but rather about comedian Jerry Seinfeld.  When he was young and working to become a comic, he had a particular technique for learning that applies to learning to program.  Jerry knew that to be a good comedian, he had to practice a lot.  The more he practiced writing comedy, the better he would become.  Similarly, the more you practice programming, the better programmer you'll become.  Jerry employed a technique for reminding himself to practice.  He kept a calendar, and every day he practiced, he would mark that day off the calendar.  After a while he would have a long streak with no breaks.  Then the incentive to keep the streak alive rather than slack off was high.  This technique can be employed in programming.  While you don't necessarily need to program every day (although that wouldn't hurt), you do need to practice regularly.  Pick a program you want to work on.  Now break that up into small steps, each taking a few hours or perhaps a day to accomplish.  After you have the list, it is just a matter of ensuring that every day (or two, or some other regular interval), you accomplish one.  Don't just program when you feel like it.  Force yourself to do it regularly.  If you wait until you get the urge, you may have long gaps and your growth will be slow.

Tuesday, September 4, 2007

You Can't Learn To Program In A Hurry

A friend turned me on to this essay from Peter Norvig entitled Teach Yourself Programming in Ten Years.  In it, Norvig attacks the idea behind the "Teach Yourself C++ in 21 Days" kind of books.  They make it look easy to learn to program.  Unfortunately, it isn't.  You can't become a good programmer in a month.  In fact, you can't become a good programmer in a year.  You don't need to go to college to learn how to program, but it helps; learning on your own is a lot of work.  Learning to program is like learning anything of value.  It takes a lot of practice.  The essay quotes research indicating it takes a decade to master any significant skill.


Norvig gives some good advice for those who want to become programmers.  Most of it, like just getting out there and programming, is great advice but also common.  A few points, though, are less commonly stated and deserve repeating.  Among these are:



  • Work on other people's programs.  Learn a program written by someone else.  This will teach you how other people have solved problems and will teach you how to write maintainable software.

  • Learn a half dozen languages.  Learn languages that have different approaches to programming.  Each new sort of language will open your horizons and help you see problems in different ways.  Solving a problem in C will lead you in different directions than solving the same problem in Smalltalk.  Unless you know both languages, you'll likely fail to see one of the solutions.

  • Understand what's going on below your language.  Modern programming languages are so high level that it's easy to forget that there is a real machine running the code.  However, as Joel Spolsky says, all abstractions leak.  That is, there will always come a time when you have to understand the layer below the abstraction to solve a bug.  If you understand how memory works, how drives work, what makes a processor stall, etc., you'll be better off.  I see this often in those we interview.  They understand a programming language, but they don't understand the operating system.  If you don't know the difference between a thread and a process or cannot describe how virtual memory works, you'll be at a loss when things go wrong (see the short sketch below this list).
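
To make the thread/process distinction tangible, here is a small Python sketch of my own (not from Norvig's essay).  A thread shares the parent's memory, so a change it makes is visible afterward; a child process gets its own copy, so the parent's variable is untouched.

    import threading
    import multiprocessing

    counter = 0

    def bump():
        global counter
        counter += 1

    if __name__ == "__main__":
        # A thread runs in the parent's address space, so its change is visible here.
        t = threading.Thread(target=bump)
        t.start()
        t.join()
        print("after thread:", counter)    # prints 1

        # A child process gets its own copy of memory, so the parent's counter
        # is not affected by the increment that happens in the child.
        p = multiprocessing.Process(target=bump)
        p.start()
        p.join()
        print("after process:", counter)   # still prints 1

Both lines print 1.  Replace the Process with a second Thread and the second line becomes 2.  That difference, shared versus separate address spaces, is exactly the kind of below-the-language detail the essay is talking about.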

There is advice here for many people.  If you are learning to program, the application is obvious.  Less obvious is that if it is going to take a long time and a lot of practice, you're going to have to put in work on your own time.  You won't learn to be a good programmer without spending your evenings and weekends doing so.  However, there are other things we should take away from this.  If you are a manager trying to grow young or new programmers, you need to give them explicit time and opportunity to program.  No one can just take some classes or read some books and become a programmer.  For those who are experienced programmers, it is a good reminder to read the code of others.  If you work at a big company, you can look at the code of those around you.  Seek out those who are more skilled than yourself and examine how they solve problems.  If you are at a smaller company, seek out open source projects.  See how they do what they do.


Here's a piece of advice that the essay doesn't mention:  rewrite your programs.  Each time you'll have a better understanding of the problem domain and thus you should be able to solve the problem in a more efficient way.  You'll learn how much you've improved when you see your old code and you'll learn to approach the problem in a new way.


Read the whole essay.