I've been doing a lot of interviewing lately, especially of college students. There is one tendency I see that really separates those who are good from those who still have more learning to do. It is the tendency of good programmers to see elegant solutions to problems, and the corollary tendency of less skilled programmers to solve every problem by adding more complexity. Stated another way, the thing that separates the best programmers from the rest is what happens when they run into a serious issue. In my observation, the best coders step back and look for a more elegant solution. The less skilled coders assume that their approach is correct and that another piece of state or another special case is the best choice.
Here is an example. The problem has been changed to protect the innocent. I often ask a question similar to the change-making problem: write a program to enumerate all of the ways to make change for a dollar. A typical approach might look something like this. Note, I didn't actually compile this code, so there could be typos. If there are, I'm sure you'll let me know.
void MakeChange()
{
    int moneyLeft = 100;
    for (int quarters = 0; quarters <= 4; quarters++)
    {
        if (quarters) moneyLeft -= 25;
        for (int dimes = 0; dimes <= 10; dimes++)
        {
            if (dimes) moneyLeft -= 10;
            for (int nickles = 0; nickles <= 20; nickles++)
            {
                if (nickles) moneyLeft -= 5;
                for (int pennies = 0; pennies <= 100; pennies++)
                {
                    if (pennies) moneyLeft--;
                    if (0 == moneyLeft) print…;
                }
            }
        }
    }
}
I know what you are thinking, "That's not the right way to solve this." And you would be correct. However, I have seen a lot of people give basically this solution. Their failure to solve it correctly the first time isn't the point of this post. Rather, it is their response to the problem. If you haven't spotted it yet, this will only work correctly for the first time down the for loops. After we get to zero, we never gain back the money from the pennies we spent during the last nickles iteration. When I point this out, the response is too often not to step back and re-examine the problem and ask, "Is there something wrong with this solution?" Rather, the typical reaction is to assume that the solution is mostly right and to tweak it. One might think of this as a specific case of not asking the 5-Whys. The initial reaction is often just to reset moneyLeft at the top of the quarters loop. When that doesn't work, more variables are added. The resulting solution looks something like this:
void MakeChange()
{
    int moneyLeft = 100;
    int moneyLeftQuarters = 100;
    int moneyLeftDimes = 100;
    int moneyLeftNickles = 100;
    for (int quarters = 0; quarters <= 4; quarters++)
    {
        moneyLeft = moneyLeftQuarters;
        if (quarters) moneyLeft -= 25;
        moneyLeftQuarters = moneyLeft;
        for (int dimes = 0; dimes <= 10; dimes++)
        {
            moneyLeft = moneyLeftDimes;
            if (dimes) moneyLeft -= 10;
            moneyLeftDimes = moneyLeft;
            for (int nickles = 0; nickles <= 20; nickles++)
            {
                moneyLeft = moneyLeftNickles;
                if (nickles) moneyLeft -= 5;
                moneyLeftNickles = moneyLeft;
                for (int pennies = 0; pennies <= 100; pennies++)
                {
                    moneyLeft--;
                    if (0 == moneyLeft) print…;
                }
            }
        }
    }
}
Not exactly an elegant solution, and not one that is easy to get right. There are a lot of subtle cases to think through. Unfortunately, code like this, or code trying to be like this, shows up on my whiteboard too often. In a simple problem such as this, it is possible to keep all of the cases in your head and get it right. When the problem becomes larger, however, this is no longer the case. The programmer with the above solution will fail. Thus the solution above is not an acceptable answer even though it technically solves the problem.
If one takes a few moments to step back and re-examine the problem, it is easy enough to see that one doesn’t need to track the amount of money left. It can be trivially calculated when necessary. This is just a specific case of the principle that one should never keep state that can be calculated. Tracking such state provides no benefit and offers the possibility that it will differ from the real state. The solution might look like this:
void BetterMakeChange()
{
    for (int quarters = 0; quarters <= 4; quarters++)
    {
        for (int dimes = 0; dimes <= 10; dimes++)
        {
            for (int nickles = 0; nickles <= 20; nickles++)
            {
                for (int pennies = 0; pennies <= 100; pennies++)
                {
                    if (100 == (quarters*25 + dimes*10 + nickles*5 + pennies)) print…;
                }
            }
        }
    }
}
Much more elegant. Fewer state variables and thus less to get wrong. All of this stems from the idea that one need not track state. There is no reason to keep a running total of the money. It's all readily available at any moment. It is this key notion that one needs in order to come up with a much improved algorithm. As long as the programmer doesn't step back and question the need for tracking how much money has been used/how much is left, they will be stuck adding complexity on top of complexity. This is a prescription for failure.
This is not an isolated case. Now that I have noticed this tendency, I can often spot it in interviews or even in code reviews. The moral of the story: always look for the elegant solution first. Can the problem be solved by eliminating something or by looking at the problem differently? Only once you have eliminated these possibilities should you add more state. Adding state isn't always the wrong solution, but it can be a crutch to avoid deeper thinking.
A few notes:
The initial paragraph isn't quite accurate. The best programmers often see the elegant solution up front and get themselves into such trouble much less often.
The final solution is not optimal. I know this. Optimizing it would not benefit the example.
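For the curious, here is one such optimization, sketched in the same uncompiled spirit as the rest of this post: once the quarters, dimes, and nickles are fixed, the number of pennies is fully determined, so the innermost loop can simply be dropped.
#include <stdio.h>

void BetterMakeChangeNoPennyLoop()
{
    for (int quarters = 0; quarters <= 4; quarters++)
    {
        for (int dimes = 0; dimes <= 10; dimes++)
        {
            for (int nickles = 0; nickles <= 20; nickles++)
            {
                int subtotal = quarters*25 + dimes*10 + nickles*5;
                if (subtotal <= 100)
                {
                    // the penny count is determined; no need to search for it
                    int pennies = 100 - subtotal;
                    printf("%d %d %d %d\n", quarters, dimes, nickles, pennies);
                }
            }
        }
    }
}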
Yikes! Using an exhaustive search to make change? I guess it depends on your definition of elegance, but as a managerial type with a background in high performance computing I always push my developers toward the solution with the minimum order of operations. In fact, as an interview question I might even quiz the applicant about why it might be better to solve this without any loops at all.
Personally, I've had limited success applying rules of thumb like "step back and look for a more elegant solution." It's about as effective as cajoling developers to "Be More Innovative" or "Don't add bugs." We know these things, but they don't tell us how to do these things.
Adding better tools to one's toolbox pays better dividends: Gather data through repeatable experiments. Divide and Conquer. Identify a simpler version of the problem and solve it first.
I'm surprised at the pooh-poohing of state. There are often times when keeping state around makes a problem much simpler to solve, especially for table-driven methods. In all my algorithmic training, I can't remember an instance where having four levels of deeply-nested for-loops was part of an elegant solution. Even if performance isn't an essential requirement for the code, the code seems awfully un-reusable. What if we wanted to make change for something other than $1.00? All those hard-coded (and uncommented) magic numbers would have to be re-calculated in some programmer's head.
I do agree that watching great programmers solve problems is a joy.
> What if we wanted to make change for something other than $1.00?
Well said... and what if you want to make change in a country that uses a 0.20 piece rather than a 0.25 piece? And what if you want to allow half-dollars?
Sometimes solving the general question is easier than solving the seemingly easier version:
use strict;
#
# first argument is a prefix
#
# second argument is the amount of money to change
#
# the rest of the arguments are the eligible coins
# these must be unique, > 0, integral, and in descending order
#
# no return; just prints out the available combinations
sub change($$@);
change("\t", 100, 25, 10, 5, 1);
sub change($$@) {
    my $prefix = shift;
    my $total = shift;
    my @coins = @_;
    if (@coins == 0) {
        # if the total came to 0 we have a solution
        print $prefix, "\n" unless $total;
        return;
    }
    my $coin = shift(@coins);
    my $most = int($total / $coin);
    for my $count (0 .. $most) {
        change(
            $prefix . ($count ? " $count * $coin" : ""),
            $total - $count * $coin,
            @coins
        );
    }
}
Some free complexity analysis: the stateless example makes the "is the total accurate" comparison (4 + 1) * (10 + 1) * (20 + 1) * (100 + 1) = 116,655 times.
By comparison, the stateful version makes the comparison 6,962 times (I calculated this by incrementing a global every time the comparison was executed. I am not aware of a way to calculate this elegantly.)
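(The stateless count is also easy to confirm with the same counter trick; here is a rough sketch of the instrumented loop, not the code actually used for the numbers above:)
#include <stdio.h>

void CountComparisons()
{
    int comparisons = 0;
    for (int quarters = 0; quarters <= 4; quarters++)
        for (int dimes = 0; dimes <= 10; dimes++)
            for (int nickles = 0; nickles <= 20; nickles++)
                for (int pennies = 0; pennies <= 100; pennies++)
                    comparisons++;   // one "does it total 100" test per innermost pass
    printf("%d\n", comparisons);     // prints 116655 = 5 * 11 * 21 * 101
}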
As I said at the bottom of the post, this is meant to demonstrate a way of solving problems that is often indicative of a broader pattern. Don't get hung up on the specifics of the implementation. The hard-coded constants are there to keep the example simple to read. The penny loop isn't optimized away either, and it doesn't abort when the total is too high mid-loop. This is to demonstrate a point; it's not how I would advocate writing such a program.
@Mike, how do you enumerate all ways of making change without a loop?
In BetterMakeChange(), the if check only evaluates to true on the first pass through, when the change for a dollar is 0 quarters, 0 dimes, 0 nickels, and 0 pennies.
@Rick, the equality test in BetterMakeChange() should be with 100:
if (100 == (quarters*25 + dimes*10 + nickles*5 + pennies)) print…;
@Chad/Rick. Good catch. I'll change it.
A bit OT, and it's probably unreadable, but it came to me as I read your post. It loops a total of 557 times, but also uses recursion. Perhaps it will stimulate discussion :).
;; assumes the ITERATE library (iter) and a curry utility such as alexandria's
;; returns a list of coin-count lists, one count per denomination
;; (quarters dimes nickles pennies with the default coins)
(defun make-change (&optional (amount 100) (coins (list 25 10 5 1)))
  (when coins
    (if (cdr coins)
        (iter (for i from 0)
              (for remains downfrom amount by (car coins))
              (while (>= remains 0))
              (appending (mapcar (curry #'cons i)
                                 (make-change remains (cdr coins)))))
        (if (zerop (mod amount (car coins)))
            (list (list (/ amount (car coins))))))))
To me, the factors that would impact how well someone goes on a task like this would be:
* problem solving skills
* level of nervousness
The interviewee who has well-developed problem-solving skills may be helped secondarily by having the confidence to sit back and think about it; conversely, a nervous interviewee probably won't have that confidence.
I've noticed that "sitting back and thinking about it" can't be done (well) in front of the computer, or at least, not while using the computer, because it seems to bring the thought levels down to details and pixels, not concepts and possibilities.
Also, I came across some brain fitness research (can't recall the link, sorry) which relates to how well they deal with the solution being wrong. Basically, when someone is afraid (read "nervous") or tired, and fails at something, they tend to retry the same or similar solutions - thus getting stuck in a rut. This can be applied to life, and gives me some food for thought - I think it has been picked up by some self-help book authors too.
But also, and slightly off on a tangent, relating more to my previous post, what may also affect the outcome is how well they know the language/tools, and thus how well they can match the problem (or sub-problems) to existing solutions. For example, regarding optimization, because of the way it is written, the lisp solution above is amenable to memoization.
I agree with Jonathan's opinion about this article. Jonathan, by memoization did you mean memorization, or something else?
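For anyone wondering what memoization means here: it is simply caching the results of sub-problems so that each one is computed only once. A rough sketch of the idea, counting combinations rather than enumerating them to keep it short (this code is illustrative, not something posted in the discussion above):
#include <stdio.h>
#include <map>
#include <utility>

// Count the ways to make 'amount' cents starting from coin 'index',
// caching each (index, amount) sub-result so it is computed only once.
// The cache is the memoization.
static const int coins[] = { 25, 10, 5, 1 };
static const int numCoins = 4;
static std::map<std::pair<int, int>, long> cache;

long CountChange(int index, int amount)
{
    if (amount == 0) return 1;                      // exact change: one way
    if (amount < 0 || index == numCoins) return 0;  // overshot, or out of denominations
    std::pair<int, int> key(index, amount);
    std::map<std::pair<int, int>, long>::iterator hit = cache.find(key);
    if (hit != cache.end()) return hit->second;     // already solved this sub-problem
    // either use one more of coins[index], or move on to the next denomination
    long ways = CountChange(index, amount - coins[index]) + CountChange(index + 1, amount);
    cache[key] = ways;
    return ways;
}

int main()
{
    printf("%ld\n", CountChange(0, 100));           // prints 242
    return 0;
}
Each (coin index, amount) pair is solved once and then reused, which is what would make the recursive solutions above cheaper when the same sub-amounts recur.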