Monday, January 29, 2007

Contemplating ViM

A conversation with a colleague got me interested in ViM once again.  I've used ViM on and off for a few years now but have never become really adept at using it.  For those who don't know, ViM stands for Vi Improved, and Vi is one of the two dominant text editors in the *nix world (the other being emacs).  Vi was written by Bill Joy.  It is hard to learn but very powerful once you put in the time.  ViM is the most popular of the modern incarnations of that editor.  I may have to dust off my installation and try to master it once again.  In my newfound interest, I located a few things which I thought I'd pass on.

First off, I figured out a good way to get the 64-bit version of ViM installed on x64 Vista.  You can install the 32-bit version but tell it to go to the x64 "program files" directory on install.  This will set up all of the shortcuts and shell extensions you'll need later.  Then you download the x64 build of ViM from here.  Finally, you just unzip the contents of that package over the top of the previously installed ViM.  Done.  So far it seems to work fine.

Next, I found a few useful documents to help me get back into the habit.  The first is this graphical cheat sheet.  It's a bit hard to print as I had to download Inkscape and play with the size a bit.  Once printed though, it is a great way to quickly find the needed keystrokes to accomplish most tasks.  The most frustrating part about using ViM is knowing what you want to do but being unable to locate the right keystrokes.

I also ran across this tutorial called Efficient Editing with ViM that laid out the key ViM instructions better than most.  I've read a lot of ViM tutorials and they tend to be hard to follow.  They probably make sense if you already understand ViM but they are hard to approach as an outsider.  This one does the job better than most.  Either that or I'm an insider now...

Finally, I've been hearing good things about ViEmu, which provides ViM-like editing inside Visual Studio.  No longer is one required to give up Intellisense to edit code efficiently.  I haven't tried it yet but I'm about to.

Mac Users Respond To Vista

I have heard several Mac users talking very kindly about our new OS.  This is encouraging as they are a bunch pre-inclined to view Vista in a negative light.  Impressing them is a hard task.  It appears that, at least in some cases, we have done that.  Here's a sampling:



  • Many of the commentators on TWiT had good things to say.

  • Andy Ihnatko writes in MacUser about several features he thinks Microsoft did better than Apple.  It's quite an entertaining read too.

  • The Apple Blog also lists 9 features they want Apple to borrow from Microsoft.  This one is more about XP than Vista though.

There are a few themes here.  They seem to be:



  • Voice recognition that works well.

  • Aero Glass including the 3D Win+Tab view (Flip 3D), the live previews, and the bouncy windows.

  • Media Center in every (premium) build

  • Working search.

  • Sidebar

Of course, there are a lot of contrarian reviews too.

While I'm on the subject, here are some of my top reasons to upgrade:



  • User Account Control & better security.  You can be a non-admin and still get your work done.

  • Search.  Find everything on your system.  Quickly.

  • Better audio UI.  Change volume for each application.  No more annoying MIDI sounds or competing MySpace pages.

  • Aero Glass.  Pretty.

  • 64-bits.  Because I'm a geek.

  • DVD burning in Media Center.  Now there's a way to archive shows.

  • Sidebar gadgets.  On a widescreen monitor, set it to always on top.  Very cool.

Friday, January 26, 2007

History of Programming Languages

I saw this on someone's wall a while ago but just now ran across it on the web.  This is a chart of all major programming languages, how they are related, and what years they were around.  It's an interesting view of where we have come from.  Two of the first 4 languages (Fortran, Lisp, B-O, and IAL) are still in use today in one fashion or another.  I find it intriguing to look at the ancestors of many of our modern languages.

Wednesday, January 24, 2007

Recommended Reading for New Test Developers

I've previously written about how to teach yourself to be a test developer.  That post included an extensive reading list.  It assumed that you were a tester and wanted to learn to be a test dev.  What if you are a new CS grad and you just got hired as a test developer?  What should you read to get a leg up on the competition?  What follows is my list of recommendations.  Some of this list assumes you'll be working in C++.  If you are going to be working in C# or Java or some other language, you'll have to make substitutions.

Testing Skills:

I'm not a big advocate of formalized testing skills.  There are those who think there is as much to learn about testing as there is about design and development.  I don't happen to subscribe to that philosophy.  Testing isn't trivial, but I'm not convinced it is rocket science either.

My only recommendation in this area is James Whittaker's How To Break Software.  This book is very practical.  Not a lot of theory but it will give you a good, working knowledge of how to test software.

Programming Skills:

With a CS degree you should have learned the basics of programming.  I liken this to learning a written language.  You know what sentences, verbs, and nouns are.  This makes you capable of writing but doesn't make you a good writer.  School projects tend to be small and throw-away.  You don't often have to deal with large code bases and the quality of the code itself is often not related to the grade achieved.

The Practice of Programming by Kernighan and Pike.  This gives you a lot of tips on how to write good software.  Short and meaty.  A great book.

Refactoring by Martin Fowler.  This book consists of two parts.  The first part explains the concept of refactoring.  How and when to make pre-existing software better.  The second part is a lexicon of refactoring patterns.  These are recommendations to handle specific situations in code.  The latter half of the book I find less useful.  Feel free to skip it.  However, the first half is extremely useful.  Make sure to read this.
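The first half's core idea can be illustrated with a tiny, hypothetical sketch (Fowler's examples are in Java; the function names here are invented for illustration, not taken from the book):

```python
# A minimal "Extract Method" refactoring, sketched in Python.

# Before: one function tangles the calculation with the formatting.
def print_owing_before(orders):
    total = 0
    for amount in orders:
        total += amount
    return f"Amount owing: {total}"

# After: the calculation is extracted into its own named function,
# so it can be reused and tested independently of the formatting.
def calculate_total(orders):
    return sum(orders)

def print_owing_after(orders):
    return f"Amount owing: {calculate_total(orders)}"

print(print_owing_before([10, 20, 5]))  # Amount owing: 35
print(print_owing_after([10, 20, 5]))   # Amount owing: 35
```

The behavior is identical before and after; only the structure improves, which is exactly what refactoring means.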

Design Patterns by Gamma et al.  Also known as the "Gang of Four" book.  The most enlightening part of the book is the first few chapters.  It shows how to design an editor using an object-oriented approach.  I personally found this to be an "a-ha" moment.  If you didn't cover design patterns in school, read this book.

Design Patterns Explained by Alan Shalloway.  Great explanation of practical use of design patterns.  Read this before you start designing programs.  You'll find yourself writing better code and you'll find maintenance a whole lot easier.

Advanced C++ Skills:

These books will help you go beyond the basics.  Going back to my written language example, consider this to be like gaining a strong vocabulary.  You can communicate your ideas with just basic vocab words but to communicate the nuance, you need a deeper understanding of the language.

Effective C++ by Scott Meyers.  If you feel comfortable with C++, this is a great next book.  It will help you to avoid many of the gotchas in C++.  This book will help you be a better programmer at the syntax level and have a much deeper understanding of the language as a whole.  Skip the More Effective C++ book though.  It's not a bad book but there are better uses for your time.

Inside the C++ Object Model by Stanley Lippman.  Deep explanation of how C++ works under the covers.  If you want to be a coding guru, give this book some of your time.

These are my recommendations.  There are certainly books that I missed.  What recommendations do you have?

Monday, January 22, 2007

How Much Memory Does Vista Need?

With Windows Vista coming soon to a retail channel near you, one of the important questions to ask is, "How much memory does it really need?"  There are the official minimum requirements of 512 MB, but we all know that minimum requirements don't translate to a great experience.  What are the real memory levels that get good performance?  After having used it for several years during the development process, I figure I'm in a pretty good place to help answer that question.  Vista definitely requires more memory than XP did to achieve similar levels of performance.  That is to be expected with all of the new functionality involved.  To run Vista at its best, I recommend you have at least 1.5 gigabytes of RAM.  In my experience, the following is a mapping from XP RAM to Vista RAM requirements for equivalent performance.

XP RAM     Vista RAM
128 MB     512 MB
256 MB     1 GB
512 MB     1.5 GB
1 GB       2 GB

In my experience, XP ran very well with 512 MB and only slightly better with 1 GB (unless you were putting it through a very serious work load).  To get this kind of performance out of Vista, you really want 1.5 GB.  1 GB will work but it will be sluggish at times.  Anything less than 1 GB will feel very slow.

If you have experience with Vista through the business release or a beta, what sort of memory performance levels do you see?  Do you concur with my recommendations?

 

Note:  This is not official guidance.  This is based merely upon my personal observations.  Your mileage may vary.

Thursday, January 18, 2007

Back To School

When I'm not trying to deliver the next great operating system or spending time with my family, I'm working on a master's degree in computer science.  It makes for a busy life.  As I'm doing this on the side, I'm only able to take one class a semester.  Last semester it was Theory of Computation (automata, Turing machines, NP, etc.).  This semester it will be computer security.  I look forward to getting back to a class where we might see a compiler.  Theory of Computation is all about math.  Let us just say that isn't my favorite subject.  It will be interesting taking this class.  I was here for Microsoft's security stand-down.  I've been to numerous security-related classes and talks.  I'm not an expert but I do have a working knowledge of the subject.  I will be looking to see how what I'm learning coincides with my practical experience.  This is a class that could be very useful or way too theoretical.  We shall see.

Wednesday, January 17, 2007

The Fat Man Speaks

George Sanger, aka "The Fat Man," is interviewed by The Escapist.  George is the founder of Project BBQ which I mentioned a few weeks back.  He's also a well-known game music designer whose credits include Wing Commander, Loom, and Master of Orion.  In this interview he comments on the current state of game audio (poor) and talks about Project BBQ and his new game-design think-tank, Project Horseshoe.

Tuesday, January 16, 2007

Hiring Great Testers - Tester Roles

In my mind, there are basically three roles on a test team.  These three roles are: developers, scripters, and those who execute the test cases (runtime testers).  In reality there is a spectrum of capabilities on any team but I think most roles will be closely related to one of these three roles.

Test Developers are the heart of a modern test team.  There was a day when you could get away with hiring a few people to just use the product and call that a test team.  This is no longer the case.  Products are becoming more complex.  The lifespan of products is increasing.  More products are being created for developers instead of end users.  These have no UI to interact with so simple exploratory testing is insufficient.  To test complex products, especially over an extended lifespan, the only viable solution is test automation.  When the product is an API instead of a user interface, testing it requires programming. 

Test developers are programmers who happen to work on a test team.  It is their job to write software which executes other software and verifies the results.  The test developers need to be at least as skilled as the developers of the product.  They need to know whatever language their product is written in.  Their job is often as difficult or perhaps even more difficult than writing the product.  Sometimes the job can be done in a simple fashion where each function is just called with various parameters and a simple method of pass/fail is monitored.  This could be a success result or some notification from the program.  A better test developer will not rely on a program to tell him if the right things happened.  Instead, he will monitor the actual behavior.  For a simple function this might just be the return result but more often than not, it requires monitoring the output of the program and making a complex decision based on it.  Test developers are also responsible for creating the test harnesses and systems for their teams.
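The difference between trusting a success signal and verifying actual behavior can be sketched with a small, hypothetical example; `save_record` and its contract are invented for illustration, not taken from any real product:

```python
# Hypothetical function under test: it reports success via a return
# value, but its real contract is writing a normalized record.
def save_record(store, key, value):
    store[key] = value.strip().lower()
    return True

store = {}

# Naive check: just trust the program's own success signal.
assert save_record(store, "name", "  Alice  ") is True

# Better check: verify the observable behavior -- the state the
# function was supposed to produce, not merely its return code.
assert store["name"] == "alice"
```

A test that stopped at the first assertion would pass even if the normalization were broken; only the second assertion actually tests the behavior.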

Scripters are those who have some programming ability but are not as skilled as a test developer.  They will usually know a programming language, but one that is less complex than the language the product is written in.  Scripters may know Visual Basic, Perl, JavaScript, or even C#.  More important than which language they know, what distinguishes them from a test developer is that their understanding of programming is less advanced.  They will understand the language syntax but their understanding of algorithms and data structures is likely less substantial.

Scripters will often play a role where they spend a lot of time living the product.  They will use their programming skills to script the product (if it has such a feature) or to utilize tools to drive the product.  The tools could be UI toolkits like Visual Test or something written by test developers.  Because scripters have an understanding of programming they will be able to have a deep understanding of the product.  They will be able to quickly determine the root cause of a failure and see relationships between seemingly unrelated bugs.  Scripters play an important role on any team.  They often handle the bulk of the test triage efforts and closely monitor the needs of the users of the product.

In some teams the responsibility for setting up machines, installing builds, and executing test cases falls to a group I'll call Runtime Testers.  This role involves no programming.  Runtime testers usually don't have any significant programming knowledge and if they do, they don't often get a chance to utilize it.  They play a fundamental role, however.  Because they are the ones running all of the tests, they often have the best knowledge of the state of the product.  If you want to know what really does and does not work, ask a runtime tester.

Sometimes this role is relegated to merely clicking buttons.  They run an automated test and log some errors.  This is a waste.  They should be tasked with understanding the product and playing with it.  As I've stated many times in the past, there is a critical need for manual testing on most projects.  These are the people best equipped to do that.  To get maximum value out of runtime testers, they should be encouraged to live the product and to become experts in the functionality the way a customer might use it.  In doing so they can become advocates for the customer and find the usability issues and corner cases that automation cannot easily find.

Each of these three roles is critical on a test team.  If you build a team with only one or two of the roles, you'll be missing something fundamental.  Each project will require a different mix of people.  Some need more test developers, others more runtime testers.  The importance of these roles comes into play not only when you task your team but also when you hire.  If someone is to be a runtime tester, hiring a CS grad may not be the best idea.  They will likely be bored and quickly move on.  Hiring a self-educated computer geek may be a better fit for the role.  Likewise, if you need test developers, hiring someone who only knows Perl is probably not a good fit.  Instead, hire someone who is truly a developer.  My rule of thumb is that if they couldn't cut it as a dev, don't hire them as a test dev.

Monday, January 15, 2007

Hiring Great Testers - Series Index

I'm going to be doing a series not on testing but on the people who carry it out.  This will be a series of posts on the subject of the testers themselves.  I plan to describe what the various roles in testing are, what to look for in a good tester, what sort of questions to ask, etc.  This post will contain links to each post in the series.  Also, this is a great place to post any suggestions for topics you might want me to cover.

To help whet your appetite, here are links to a few posts from the past few years on the subject of testers:

Hiring Great Testers Index:

Saturday, January 13, 2007

Duplication of Effort Is Good?

I was in a meeting the other day deciding what to do next in our testing efforts.  Several times during the meeting someone made a suggestion which was countered by a statement something like this: "Shouldn't we let the centralized team handle this?" or "Could we write this in a more generic fashion so everyone can benefit from it?"  In general these are good questions to ask.  Every time, though, something inside me reacted negatively.  I didn't want to go the centralized route.  This post is an examination of why I felt that way, why it is perhaps not always a good idea to do things centrally, and why it might instead be a good idea to duplicate effort.

The sort of issues we were talking about revolve around test harnesses and test systems.  For instance, we might want some new way of generating reports from our shared test result database or some new way of monitoring performance issues.  Each of these can be solved in a specific way for a problem area or can be solved more generally and serve many problem areas.  Doesn't it always make sense to solve it more generally? 

It sure seems so on its face.  After all, if Team A and Team B both have similar problems and both put N units of effort into a solution, we've spent 2N units of manpower when we might have only spent N and used the solution 2 times.  That other N units of manpower could be better spent working on a solution to a different problem.  Is it really that simple?

As programmers we are trained from early on not to rewrite code.  If we need to do something two times, we are trained to write it once and then call it from both places.  When it comes to code, this is good advice.  However, when it comes to tools, the same rules may not apply.  Why not?  Because when it comes to tools, the jobs you are trying to do are, more often than not, non-identical.  Even if 80% of the jobs are the same, that last 20% of each makes a big difference.

Writing one tool to do the same job in two places is always better than writing two different tools.  But what if the jobs are dissimilar?  In that case, the answer is not necessarily clear.  Instead we need to weigh the issue carefully before making a decision.  The advantages of doing things once are obvious:  it saves us from having to duplicate effort.  There are, however, disadvantages that many times overwhelm that advantage.  That is, the cost of doing things one time is higher than the cost of doing it twice.  Why might that be?  There are three things which cut against the time savings of doing things one time.  First is increased complexity of the task.  Second is the decreased quality of the mapping between the problem and the solution.  Third is the lack of responsiveness that often accompanies a shared project.

Solving two identical problems with one solution is easy.  It takes N time.  However, solving two similar but not identical problems takes greater than N time.  That's easy enough to accept but it should still take less than 2N time.  Right?  Assume two problems are 80% similar.  That means they are 20% different.  It might seem that it would take 1.2N time to solve both problems.  Unfortunately, that isn't the case.  Making one framework do two things is harder than just doing two things independently.  I would argue it is up to M^2 as hard (where M is that delta of 20%).  Thus instead of taking 200 one percent units of time to do the duplicate work, it might take 480*.  Why this polynomial explosion in time?  Because trying to get one interface to do one thing is easy but trying to get it to do two is much harder.  You have to take into account all of the tradeoffs, mismatches, and variations required to handle each situation.  If you have M functions (or variables) and each needs to take into account M other functions, you have M*M or M^2 possibilities to deal with.  For instance, if we are calculating an FFT to compare audio samples and one problem expects the values to be in float32 and the other in int32, trying to make one interface that handles both situations is very hard.  The same thing happens when you try to create a test system that handles everyone's problems.  It becomes inordinately complex.  Writing it becomes a painstaking process and takes longer than solving the problem twice.
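The footnote's arithmetic can be sketched directly.  The cost model and the 80/20 split are the post's own illustrative figures, not measured data, and the function names are mine:

```python
def shared_tool_cost(common, delta):
    """Cost model from the post: the shared 'common' portion costs
    its size once, but the differing portion interacts with itself,
    costing delta squared."""
    return common + delta ** 2

def separate_tools_cost(size, count=2):
    """Writing each tool independently just costs its size per tool."""
    return size * count

# Two problems that are 80% similar (80 units common, 20 different),
# measured in "one percent units" of a single solution:
print(separate_tools_cost(100))   # 200: solve the problem twice
print(shared_tool_cost(80, 20))   # 480: one shared solution
```

Under this model the shared solution only wins when the delta is small; at 20% difference it is already more than twice as expensive as duplicating the work.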

There is a saying "jack of all trades, master of none."  This applies to software just as it applies to professions.  Just as the best carpenter is probably not also the best plumber, the best solution to problem A is probably not the best solution to problem B.  There are always tradeoffs to be made in trying to solve two problems.  Something inevitably becomes harder to solve.  I have to do more initialization or I have to transform my data into a universal format or I have to fill in fields that make no sense for my cases.  Most of these are small annoyances but they add up quickly.  In Windows we have a unified test system for almost all of our testing.  During the Vista timeframe, this was brought online and everyone moved to it.  Changing systems is always painful but even after the cost of switching was paid, many (most?) people still felt that the new solution was inferior to whatever they had been using before.  Was the new system that bad?  Yes and no.  It wasn't a terrible system but the incremental cost of a less specialized tool added up.  Benji Smith has an essay I think is applicable here.  He is talking about web frameworks but uses the concept of hammers.  There are many different kinds of hammers depending on whether you want to tap in a brad, pull a nail, pound a roofing nail, or hammer a chisel.  A unified hammer doesn't make sense and neither does a framework for creating specialized hammers.  Sometimes having 3 hammers in your drawer is the right answer.

Finally, there is the problem of responsiveness.  Shared solutions are either written primarily by one team and used by a second or they are written by a specialized central team and used by everyone else.  Both of these have problems.  If my team is writing a tool for my team's problem, we're going to get it right.  If we need certain functionality in the tool, it will be there.  However, if someone else is writing the tool, we may not be so fortunate.  One of two things is likely to happen.  Either the other team will have different priorities and so our feature will be written but at a time not to our liking or worse, our feature will be cut.  It could be cut due to time pressures or because it was decided that it conflicted with other scenarios and both couldn't be solved simultaneously.  Either way, my team now has a big problem.  We can work around the tool which is always more expensive than writing it in the first place.  Or we can forego the use of the feature which means our functionality is limited.

When one adds up the explosion in complexity, the lack of clean mapping between the problem space and the solution, and the inevitable lack of responsiveness, it can be much more expensive to go with the "cheaper" solution.  Here is a rule of thumb:  if you set out to solve everyone's problem, you end up solving no one's well.

Now that I think about it, these same principles apply to code.  If you are doing the exact same thing, factor it out.  If you are doing similar things, think about it carefully.  Writing generic code is much harder than writing specific code in two places.  Much harder.  A while ago I read an essay (which I can't find now) about writing frameworks.  Basically it said that you shouldn't.  Write a specific solution.  Then write another specific solution.  Only when you have a really good sense of the problem spaces can you even hope to produce a decent framework.  The same goes for testing systems or other general tools.  Specialization can be good.

All too often we make a simplistic calculation that writing something twice is wasteful and don't bother to think about the extended costs of that decision.  Many times writing one solution is still the correct answer but many other times it is not.  In the zeal to save effort, let us be careful we don't inadvertently increase it.  The law of unintended consequences operates in software at least as well as it does in public policy.

* 80 + 20^2 = 480

Monday, January 8, 2007

Letting Test Drive the Process

When I read the term "test driven development" I usually think of the process of developers writing the unit tests before writing the code.  Richard Collins, a Development Strategist, uses it to mean something different.  He advocates that test be heavily involved from the beginning.  Instead of relegating test to the end of the process to pick up the pieces, involve them from the very start.  He gives the following advice:



  1. Involve test during design.  Have them help make the product testable from the outset.

  2. Release to test often.  This is just releasing regular builds and having test look at them every day.

  3. Maintain a 1:1 test:dev ratio.

  4. Build testability into the product.  Factor it so that each part is testable before the whole is finished.

Basically he is saying make test a first-order member of the team, design in testing, and test all the time.  My thinking on the subject tends to agree with Richard's.  Involving test early and as an equal partner is critical to modern software.  As complexity increases, seat-of-the-pants testing (often done by developers) becomes harder and harder to get away with.


To varying degrees, this is how Microsoft treats test.  We hire skilled test developers who can program instead of QA engineers who might understand some Perl.  We get test involved early.  In fact, we involve them not only to help look at the plans for testability but also for overall design.  As the representatives for end-users, they often have a very good eye for the little things that will make a design that much better.  Also, at Microsoft, testing begins from day one.  Every product I've ever been involved with at Microsoft has had daily builds from very early on.  Every product has also had what we call BVTs (build verification tests) that are run every day right after the build completes.  If any of their tests fail, the product is held until they can be fixed.
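As a rough illustration, a BVT gate boils down to something like the sketch below.  The test names and checks are hypothetical stand-ins, not Microsoft's actual BVT system:

```python
# Hypothetical smoke tests run against the fresh daily build.
def test_build_boots():
    return True

def test_installer_runs():
    return True

def run_bvts(tests):
    """Run every BVT; if any fail, the build is held until fixed."""
    failures = [t.__name__ for t in tests if not t()]
    if failures:
        return ("HOLD", failures)
    return ("RELEASE", [])

status, failed = run_bvts([test_build_boots, test_installer_runs])
print(status)  # RELEASE
```

The essential property is that the gate runs automatically after every build and a single failure blocks the build, which keeps the product in a continuously testable state.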

Saturday, January 6, 2007

Printer Problems on Vista x64

I've decided to take the plunge and I'm running Vista x64 on my primary home system.  So far things are going well.  I haven't found any x86 programs that don't run yet.  I'm sure they are out there but I haven't run across them yet.  The driver signing thing has bitten me a few times though, such as with RivaTuner.  One thing I have noticed is that most programs--even open source programs--don't have an x64 build yet.  If they do, it is usually out of date and unsupported.  Both Thunderbird and Vim fall into that category.  It's unfortunate really.  There's no excuse not to have a 64-bit build.  Perhaps with Vista being more readily available in a 64-bit version some of this will change.  Enough of that though.  On to the reason for this post.


My wife's printer appears to have died when I toggled a circuit breaker to install a new light.  To get her running while we try to troubleshoot and perhaps get a new printer, I attempted to attach her machine to the Brother HL-5150D laser printer attached to my Vista machine.  Sharing in Vista is easy and I could see the printer from her Windows XP machine.  However, when I tried to install it, there were drivers needed.  That makes sense as the drivers I installed on the Vista box were x64 drivers.  No problem.  I figured I would just install the drivers for x86 locally and be done with it.  It's never that easy.  I downloaded the drivers but the system wouldn't recognize them.


Next I decided to just install the drivers on my Vista box so that her machine would find them automatically.  I downloaded the right drivers there, went to the printer sharing, selected sharing and additional drivers.  I selected x86, clicked OK and was prompted for the drivers.  When I pointed it at the drivers, I got the following error:  "The specified location does not contain the driver Brother HL-5150D for the requested processor architecture."  This had me stumped.  It was the right driver and the right architecture.  I did some searches on Brother's site and on live.com to no avail.  Eventually I ran across a forum post that pointed me at a post on another forum which eventually had enough information for me to discern the solution.


The problem appears to be a mismatch in the name of the device.  When I looked at the x86 inf, I saw the following (edited to remove extraneous info):



[DriverName]
"Brother HL-5150D series"  = BH5150D.PPD,USBPRINT\BROTHERHL-5150D_SERIF199,LPTENUM\BROTHERHL-5150D_SERIF199,BROTHERHL-5150D_SERIF199


Notice that the system is saying it cannot find the "Brother HL-5150D" yet the inf is for a device called "Brother HL-5150D series".  Firing up Vim, I changed the entry as follows:



[DriverName]
"Brother HL-5150D"  = BH5150D.PPD,USBPRINT\BROTHERHL-5150D_SERIF199,LPTENUM\BROTHERHL-5150D_SERIF199,BROTHERHL-5150D_SERIF199


Problem solved.  The x86 drivers are recognized by Vista and installing the shared printer on XP now works perfectly.


The next question, though, is where the name came from.  I found some x64 drivers I downloaded to my machine in the temp folder (why they insist on putting drivers there I have no idea.  I'm sure it makes it easy for mom to find them when installing, right?).  These have the word series in them.  Strange.  After looking in setupapi.dev.log, I found the file prnbr001.inf.  It contained these lines:



[Brother.NTamd64]


"Brother HL-5150D"                                            = MS_BRH5150U.GPD,BROTHERHL-5150D_SERIF199,Brother_HL-5150D_USB,Brother_HL-5150D ; Hardware ID


Looks like I found my culprit.  Vista ships with drivers for the printer but the name of the device in those drivers does not match the name of the device in the driver Brother has on their web site.


Hopefully this will get indexed by the servers in the cloud and help someone out there solve their problem.


There is one mystery left though.  For some reason findstr can't seem to find 5150 inside the prnbr001.inf file.  This command: "findstr /in 5150 prnbr001.inf" won't find it.  If anyone knows why that might be the case, let me know.
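One plausible explanation, offered only as a guess: in-box inf files are often saved as Unicode (UTF-16) text, and findstr searches the file's raw bytes, so an ASCII pattern never lines up against the two-byte characters.  A quick Python sketch shows the effect:

```python
# In a UTF-16LE file, "5150" is stored with a zero byte after each
# character, so a byte-level search for the ASCII pattern fails.
data = "5150".encode("utf-16-le")   # b"5\x001\x005\x000\x00"

assert b"5150" not in data                  # raw byte search misses
assert "5150" in data.decode("utf-16-le")   # decoded search succeeds
print("byte search fails, decoded search works")
```

If that is the cause, converting the inf to ANSI (or searching with a Unicode-aware tool) should make the string findable.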

I'll save you some typing...

If you are one of those who read this blog directly in your browser and not through an RSS aggregator, I have added a new URL which can save you some typing.  http://steverowe.net will bring you to http://blogs.msdn.com/steverowe/ now.  I have had the domain name for a while but wasn't using it for anything.  This is as good a use as any.

Wednesday, January 3, 2007

Interview With Office Test Director

Tara Roth, the newly anointed director of test for Office, was recently interviewed by Channel 9.  She talks about her role, what drew her to test, the attributes of good testers, the testing product cycle, etc.  It is an informative interview about how test works in the release cycle and good insight into how Microsoft approaches testing our products.  If you have 1/2 hour, give it a view.

Tuesday, January 2, 2007

Project BBQ Reports Are Available

In October of last year I flew to Texas to attend what is called Project Bar-B-Q.  It is promoted as "The premier interactive music think tank" and that pretty much sums it up.  It is a gathering of 50 people from the computer audio and music industries to talk about the future of computer-oriented audio.  Some of the topics this year covered audio for games, the quality of audio coming from PCs, standards for software/hardware interfaces, DRM, etc.  I found it a very useful conference.  I had an opportunity to meet a lot of people in the industry and learn about a lot of exciting trends.  If you are involved in PC audio or music, consider attending.  It's well worth your time.

The reports from the 2006 conference are finally available.  I was a participant in the working group and editor of the report on audio system configuration.  My particular group had a wide variety of participants from companies involved in codecs, pc systems, software (me), and microphones.  We discussed what it would take to make configuration of audio simple for a variety of scenarios including home theater and laptop communications.  You can find our report here.

There are some photos of the conference available here.  I'm the one in the middle of this picture with the blue jeans and dark green jacket.

Welcome to 2007

Just a quick note to say welcome back to work.  I'm sure that, like me, many of you had some time off at the end of the year but that time is over (or will be soon).  I hope that you all had a good year in 2006 and that 2007 treats you at least as well.  I've finally found some time to blog more than once a month.  My resolution is to keep that up.  :)  During the coming month or so I plan to post a series on hiring testers including descriptions of the different sorts of testing positions, some tips for interviewing, what to look for, etc.  If you have other topics you'd like to see me cover in the area of testing, development, audio, or video, please leave a comment and let me know.