Monday, February 27, 2006

HD-DVD Fighting Back

A few months ago, HD-DVD was written off by many.  There were no exclusive studios and most vendors seemed to have joined the Blu-Ray camp.  Then something changed.  HP decided it would take a more neutral stance.  Microsoft and Intel came out strongly backing HD-DVD.  The PS3 is going to be late and expensive.  Today, Blu-Ray may still have the edge, but its lead is narrow and the momentum is going the wrong way.  It is losing that sense of inevitability it once had. 


The New York Times published an article explaining some of this momentum shift.  It also has some quotes from our VP, Amir Majidimehr.


Some snippets:


The possible delay and the Blu-ray group's loss of its once-commanding lead are not encouraging developments for Sony in its attempt to revive its electronics group after a series of bungles.


Toshiba will sell two players starting in March; one will cost just $499, half the price of the cheapest Blu-ray machines, the first of which will hit the stores this spring. Samsung's first machine will cost $1,000, while Pioneer's Blu-ray player will run $1,800.


"The pendulum is swinging back to the HD-DVD camp," said John Freeman, who runs a technology research firm, Strategic Marketing Decisions, which last year declared Blu-ray the front-runner. "It will be interesting to see if the Blu-ray group can recover. It's only a matter of time before people start backing out of the Blu-ray camp."



Wednesday, February 22, 2006

February Vista CTP Is Out

I won't be the first to mention it but I thought I would add my $0.02 about it.  The February CTP release of Vista, build 5308 is now out.  It should be available to you if you are a registered beta user.  If you aren't, there are lots of reviews and lots of pictures out there. 


This new release marks a major milestone: it is now feature complete.  The way it looks now won't change a whole lot by the final release.  From here on out, the emphasis changes from getting features in to solidifying the product.  We'll spend the next n months fixing bugs and improving performance.  You can see some of that already even in this build.  I am running it on my main desktop at work and find it substantially more responsive than previous versions.  Back in October I talked about how a new OS usually got worse before it got better.  We are now at that tipping point.  It is looking better and will continue to improve between here and the launch.


Here are a few links to give you perspective on the new build:


PC Magazine has about 55 screen shots available.


PC World talks about the new gadgets.


CNet also has a review with pictures.

Thursday, February 16, 2006

Dependent Test Cases

As most of my blog posts do, this one stems from a conversation I had recently.  The conversation revolved around whether all test cases should be independent or if it was acceptable to have one rely upon another.  It is my contention that not only are dependent test cases acceptable, but that they are desirable in many circumstances.


There are two basic ways to structure test cases.  The most common sort is what I will term “Independent.”  These test cases are self-contained.  Each test case does all required setup, testing, and cleanup.  If we were testing the Video Mixing Renderer (VMR), an independent test case would create the playback graph, configure the VMR, stream some video, verify that the right video had been played, and tear down the graph.  The next test case would repeat each step, but configure the VMR differently.
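To make the structure concrete, here is a minimal sketch of independent test cases using Python's unittest as a stand-in for the kind of harness described.  FakeVMR is a hypothetical placeholder for the real Video Mixing Renderer; the point is the shape of the cases, not the API.

```python
import unittest


class FakeVMR:
    """Hypothetical stand-in for the Video Mixing Renderer."""

    def __init__(self):
        self.configured = False
        self.frames_rendered = 0

    def configure(self, mode):
        self.mode = mode
        self.configured = True

    def stream(self, frame_count):
        if self.configured:
            self.frames_rendered = frame_count


class IndependentVMRTest(unittest.TestCase):
    def setUp(self):
        # Every case builds its own playback graph from scratch.
        self.vmr = FakeVMR()

    def test_windowed_mode(self):
        self.vmr.configure("windowed")
        self.vmr.stream(30)
        self.assertEqual(self.vmr.frames_rendered, 30)

    def test_windowless_mode(self):
        # Same steps repeated, only the configuration differs.
        self.vmr.configure("windowless")
        self.vmr.stream(30)
        self.assertEqual(self.vmr.frames_rendered, 30)

    def tearDown(self):
        # Tear the graph down; nothing leaks to the next case.
        self.vmr = None
```

Because each case carries its own setUp and tearDown, the two tests can run alone or in either order.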


The second sort of test case is what I will call a “Dependent” test case.  This test case carries out only those actions required to actually test the API.  All other work to set up the state of the system is done either by the test harness or by a previous test case.  For example, assume we are testing the DirectShow DVD Navigator.  The test harness might create the graph.  Test case 1 might start playback.  Test case 2 might navigate to the right chapter and title.  Test case 3 might then run a test that relies on the content in that location.  When all is done, the harness tears down the graph and cleans up.  Test case 3 relies upon the harness and the test cases before it.  It cannot be run without them.
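The DVD scenario above can be sketched as follows.  FakeDVDNavigator is a hypothetical stand-in for the DirectShow DVD Navigator; each case performs only its own step and relies on the state left by the cases before it.

```python
class FakeDVDNavigator:
    """Hypothetical stand-in for the DirectShow DVD Navigator."""

    def __init__(self):
        self.playing = False
        self.title = None
        self.chapter = None

    def play(self):
        self.playing = True

    def jump(self, title, chapter):
        self.title, self.chapter = title, chapter


# Each case does only its own step; state carries forward.
def case_start_playback(nav):
    nav.play()
    assert nav.playing


def case_jump_to_location(nav):
    nav.jump(title=1, chapter=3)
    assert (nav.title, nav.chapter) == (1, 3)


def case_verify_content(nav):
    # Depends on the previous cases: playback running, position set.
    assert nav.playing and nav.chapter == 3


def run_sequence():
    nav = FakeDVDNavigator()  # the harness builds the graph once
    for case in (case_start_playback, case_jump_to_location, case_verify_content):
        case(nav)             # cases run in order, sharing state
    return nav                # the harness would tear down here
```

Run case_verify_content by itself and it fails; run the sequence and each step finds the state it needs already in place.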


Some will argue that all test cases should be independent.  At first glance, this makes a lot of sense.  You can run them in any order.  They can be distributed across many machines.  You never have to worry about one test case interfering with the next.  Why would you ever want to take on the baggage of dependent cases?


There are at least two circumstances where dependent test cases are preferable.  They can be used to create scriptable tests and they can be much more efficient.


Most test harnesses allow the user to specify a list of test cases to run.  This often takes the form of a text or XML file.  Some of the better harnesses even allow a user to specify this list via the UI.  Assuming that the list is executed in the specified order (not true of some harnesses), well-factored test cases can be combined to create new tests.  The hard work of programming test cases can be leveraged easily into new tests with a mere text editor or a few clicks in a UI.


Independent test cases are not capable of being used this way.  Because they contain setup, test, and cleanup, the order in which they are run is irrelevant.  This can be an advantage in some circumstances, but it also means that once you are done coding, you are done gaining benefit from the work.  You cannot leverage that work into further coverage without returning to the code/compile cycle which is much more expensive than merely adding test cases to a text file.


Let’s return to the DVD example.  If test cases are written to jump to different titles and chapters, to select different buttons, to play for different times, etc., they can be strung together to create a nearly infinite matrix of tests.  Just using the test harness, one can define a series of test cases to test different DVDs or to explore various areas of any given DVD.  I created a system like this and we were able to create repro cases or regression cases without any programming.  This allowed us to quickly respond to issues and spend our energy adding new cases elsewhere.  If the DVD tests were written as independent test cases, we would have had to write each repro or regression case in C++ which would take substantially longer.  Additionally, because the scripts could be created in the test harness, even testers without C++ skills could write new tests.


Dependent test cases can also be more efficient.  When testing a large system like the Windows Vista operating system, time is of the essence.  If you want to release a build every day and to do so early enough for people to test it, you need BVTs (build verification tests) that complete in a timely manner.  If the time for setup and cleanup is substantial, doing it for each test case will add up.  In this case, doing it only once for each test run saves that time.
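The arithmetic behind this is simple.  With per-case cost t and setup cost s, independent cases pay n·(s + t) while dependent cases pay s + n·t.  A rough cost model, with illustrative numbers (the 200 cases and 30-second setup are assumptions, not measurements):

```python
def total_time(cases, setup_s, test_s, shared_setup):
    """Rough cost model: independent cases repeat setup; dependent cases share it."""
    if shared_setup:
        return setup_s + cases * test_s
    return cases * (setup_s + test_s)


# e.g. 200 BVT cases, 30 s of graph setup each, 5 s of actual testing each
independent = total_time(200, 30, 5, shared_setup=False)  # 7000 s
dependent = total_time(200, 30, 5, shared_setup=True)     # 1030 s
```

At those numbers, sharing setup cuts the run from nearly two hours to about seventeen minutes, which is the difference between a BVT pass that finishes before people arrive and one that does not.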


Dependent test cases work best when the system under test is a state machine.  In that instance, setup becomes more complex and factoring good test cases becomes easier. 


Dependent test cases are not the answer to all questions.  They probably aren’t even the answer to most questions.  A majority of the time, independent test cases are best.  Their ability to be scheduled without reliance upon other cases makes them more flexible.  However, dependent test cases are an important technique to have in your toolbox.  In some circumstances, they can be a substantially better solution.

Wednesday, February 15, 2006

Effective Text Editing

I ran across this article I thought I'd share:  Seven habits of effective text editing.  It's written by the author of VIM (Vi Improved) but the techniques apply to all good text editors.  The techniques are:


Move Around Quickly - Use the editor to jump quickly to your destination.  This involves not only search but also bracket matching, jumping to symbols, etc.


Don't Type It Twice - Search and Replace, Completion, Macros, etc.


Fix It When It's Wrong - Syntax highlighting


A File Seldom Comes Alone - An editor should support quickly moving between files and viewing more than one file at a time.


Let's Work Together - Can you integrate your editor with other programs? 


Text Is Structured - Can your editor integrate with your build system?  Does it recognize errors? 


Make It A Habit - Spend some time learning your editor.  If you never read the docs, you'll never find where the editor can help you save time.


If you are using an editor which doesn't allow for these techniques, you might consider switching.  A friend turned me onto VIM some time back and I've started to become proficient with it.  The learning curve is a little steep but the payoff is big.  It's also really lightweight which is nice in today's era of heavy-footprint editors.  Right now I have VIM (well, GVIM), MS Word, and Visual Studio.Net 2003 running.  Their memory usage in megs is 5, 23, and 18.  By comparison, Notepad appears to take about 3 megs.

Wednesday, February 8, 2006

Catching the Podcast Bug

With the new year, I decided to take advantage of my health club benefit.  For the past couple of weeks, I've been going to the gym several times a week.  It's been a few years since I last made a go of it.  So far, I'm being more successful.  Part of that success has been my concurrent discovery of podcasting.  Listening to podcasts makes running in place much more interesting.


I have a Creative Zen Nano Plus.  This is a nice little player.  It is small, light, and is PlaysForSure certified so I can take purchased content with me.  My one complaint about the player is that it is not as solidly built as I would like.  It is all plastic and feels like it could easily break.


To date, I haven't started using podcasting software.  At present, I'm just going to the various pages and downloading directly.  This usually works but a few sites only have RSS feeds.  I've tried Juice and Doppler.  Juice insists on launching iTunes.  Thanks, but no.  Doppler is pretty cool as it automatically places the files into Windows Media Player.  I will have to give it a second look.


Here are the podcasts I've found interesting so far:



  • This Week In Tech - Leo Laporte, John Dvorak and others talk about the tech news of the week and interview technology guests.

  • QA Podcast - Short podcast about software testing.  I've only listened to a couple so far but find it interesting.

  • NerdTV - Think of it as Charlie Rose for geeks.  Bob Cringely interviews big-name guests like Doug Engelbart, Dan Bricklin, Dave Winer, etc.

  • The Signal - Everything Firefly.

  • Firefly Talk - Everything else Firefly.

  • Engadget - News from the world of gadgets.  Wanna know about the newest cell phones?  Try this.

  • The Dice Tower - The latest news from the world of boardgaming.

  • Major Nelson - An insider on the XBox Live team talks gaming news and interviews the people creating the XBox 360.

What podcasts have you found that I should check out?  Are there any good programming/software development podcasts out there?

Tuesday, February 7, 2006

Friendly Reminder: Utilize Source Control

I just finished talking to someone who lost 1 1/2 weeks of work due to an inadvertent keystroke.  It's not only hard drive failure that may get you.  With hardware stability quite high these days, we sometimes feel invincible.  It is easy to get complacent and forget that we are fallible.  All it takes is some bad luck, though, and everything is gone.  Source control is your friend.  Use it.  Check in your changes regularly.  If you don't want to commit changes to an active tree, make a working branch and check into that.  Checking in regularly also helps if you ever decide that you don't like the changes you just made.  If you keep the work locally, you're stuck if you ever decide that invasive refactoring just isn't worth the trouble.


Steve's rule of thumb:  Check in at least once a day before you go home.  I tend to check in every time I have a clean compile/test cycle.

Wednesday, February 1, 2006

Lisp and Ruby

I recently came across Paul Graham's essay, Beating the Averages where he talks glowingly about Lisp.  It piqued my interest in this language.  Another language I've been toying with learning is Ruby.  Lisp is compiled (at least it can be) and Ruby is purely interpreted.  Lisp is reportedly very powerful but the community is small.  Ruby is popular but has a lot of rough edges.  Both have many interesting language elements.  Which one to learn?  While reading some articles on Lisp, I ran across two really good articles comparing the two.  I thought I'd share them with you:


Power vs. Popularity


Why Ruby is an Acceptable Lisp


Make sure to read the comments.  Most of the meat is in there.


If you have thoughts about which language I should pursue, leave them in the comments.


2/7 Update:  See http://del.icio.us/tag/ruby+lisp for a list of articles on this subject.