New programmers often don't appreciate the power of algorithms. They have one criterion for a piece of code: does it calculate the right answer? If it does, they go with it. Every CS program has at least one class on algorithms, but if you are a self-taught programmer, you may never have been exposed to the ideas. This article stems from a conversation with a friend who is trying to learn programming on his own. It is a simple introduction to the concepts of algorithms and an explanation of why you might care about them. Most of the time, the programs we write deal with small enough data sets and fast enough processors that our choice of algorithm doesn't make a material difference. With a large enough number of items, however, a poor choice of algorithm can make a tremendous difference.
Here is an example. When I was first programming, I wrote a Java servlet to access a database of all the DVDs we were testing. This was very early on, and there were only a few hundred DVDs in print. The servlet had one page which would display each of the DVDs, along with some vital information, in a table format. I used simple string functions to build up the HTML describing the table. With 200 discs, everything worked fine. We were buying all of the DVDs available at the time, and eventually our database grew to encompass over a thousand discs. Somewhere along the line, I noticed that bringing up the "all discs" page was taking a really long time: something like 3 minutes to display the page. I did some investigation and figured out the problem. Each time I concatenated another piece of text or table markup, the entire string was copied to a new location along with the new information. The same string was being copied tens of thousands of times. The solution was to switch from the built-in strings to a StringBuffer object, which uses a better algorithm for appending text. The time to render the page dropped from 3 minutes to 30 seconds.
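To see the difference in code, here is a minimal C# sketch (the original servlet was Java, where StringBuffer plays the role that StringBuilder plays in C#; the names and markup below are illustrative, not the original code):

using System.Text;

// Naive version: each += copies the entire string built so far,
// so the copying work grows with the square of the number of rows.
string BuildTableNaive(string[] titles)
{
    string html = "<table>";
    foreach (string title in titles)
        html += "<tr><td>" + title + "</td></tr>";
    return html + "</table>";
}

// Buffered version: appends go into a growable buffer, so the
// total work grows only linearly with the size of the output.
string BuildTableBuffered(string[] titles)
{
    StringBuilder html = new StringBuilder("<table>");
    foreach (string title in titles)
        html.Append("<tr><td>").Append(title).Append("</td></tr>");
    return html.Append("</table>").ToString();
}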
Before beginning, we should define what we mean by algorithm. An algorithm is a set of instructions for accomplishing a task. Everything we write in software is an algorithm of some sort, but we usually reserve the term for the parts of our programs that accomplish particularly interesting tasks. More often than not, the term refers to reusable tasks like searching, sorting, etc. These algorithms are language and platform agnostic; that is, the concepts are equally valid in virtually all programming languages, operating systems, and processors.
If we're going to compare the performance characteristics of different algorithms, we'll need some benchmark to measure them against. We could just run the algorithms and see how long they take, but that only tells us how they behave on one particular data set. Instead, we use something referred to as "Big-O" notation* to describe the runtime characteristics of a function. This describes how the behavior of the function (measured as time everywhere in this article) is affected by the size of the data set. For each value we add, how does the performance react? If we know how long it takes to do something to a sample set of size 1, we can then compare the time it takes for bigger sample sets. The notation is as follows: O(1), O(n), O(n^2), etc. O(1) indicates that the time is constant; that is, it is unrelated to the size of the input. O(n) says the time it takes increases linearly with the size of the input. O(n^2) indicates that the time grows as the square of the number of items. An algorithm in the class O(n) will run faster than one in the class O(n^2). For example, if it took 25 instructions to do the work for 1 item, an O(n^2) algorithm would take 250,000 instructions for 100 items; an O(n) algorithm would take just 2,500.
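To make those classes concrete, here is a hypothetical pair of C# functions (the names and tasks are illustrative only): the first makes a single pass over its input, so it is O(n); the second compares every pair of items, so it is O(n^2).

// O(n): one pass over the data; the work grows linearly with n.
long Sum(int[] items)
{
    long total = 0;
    foreach (int item in items)
        total += item;
    return total;
}

// O(n^2): examines every pair of items; doubling n quadruples the work.
int CountDuplicates(int[] items)
{
    int duplicates = 0;
    for (int i = 0; i < items.Length; i++)
        for (int j = i + 1; j < items.Length; j++)
            if (items[i] == items[j])
                duplicates++;
    return duplicates;
}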
Let's look at some example algorithms and how they affect running time.
Searching for a Phone Number
Think about looking up someone's phone number in a phone book. How do you do that? We could just start on page 1 and look at each entry to see if it is the person we are looking for. This will take a while. If we consider just pages, not actual names, we'll have to examine an average of half the pages before we find the number we're looking for. In the worst case, we'll have to look at every page. My phone book has 969 pages in the residential listings, so it will take us an average of 485 pages to find the name we want. If we're looking for Fred Zylstra, it's going to take us all 969 pages. This is an O(n) algorithm. It is how many programmers search for an item in a list, but it isn't how anyone actually searches a phone book. Instead, we open to a page in the middle; if the name we're looking for is earlier in the alphabet, we turn halfway toward the front of the book, and if it is later, we turn halfway toward the end. We keep repeating this process until we find the name. This algorithm will find the name we're looking for by examining at most 10 pages. This is O(log2(n)). Here is a graph of the two functions:
The green line is the linear algorithm. The red line represents the way most people actually search.
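In code, the phone book approach is a binary search. Here is a minimal C# sketch (assuming, just as with a phone book, that the names are already sorted):

// Returns the index of target in sortedNames, or -1 if it is absent.
// Each comparison halves the remaining range, so at most about
// log2(n) comparisons are ever needed.
int FindName(string[] sortedNames, string target)
{
    int low = 0;
    int high = sortedNames.Length - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;
        int comparison = string.CompareOrdinal(sortedNames[mid], target);
        if (comparison == 0) return mid;   // found it
        if (comparison < 0) low = mid + 1; // look in the back half
        else high = mid - 1;               // look in the front half
    }
    return -1; // not listed
}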
Calculating Fibonacci Numbers
This one is slightly less real-world, but it demonstrates the advantage of a good algorithm well. Fibonacci numbers are a series of numbers where each number is the sum of the two previous numbers. Formally, F(n) = F(n-1) + F(n-2), where F(0) = 0 and F(1) = 1. These numbers get large really quickly: the 10th number is 55, the 30th is 832,040, and the 100th is 354,224,848,179,261,915,075. There are two common ways to calculate a Fibonacci number. The sequence is often used when teaching recursive programming, and the recursive algorithm looks something like this:
UInt64 CalcFibonacciRecursive(UInt64 number)
{
    // Base cases: F(0) = 0 and F(1) = 1.
    if (number == 0) return 0;
    if (number == 1) return 1;

    // Recursive case: F(n) = F(n-1) + F(n-2).
    return CalcFibonacciRecursive(number - 1) + CalcFibonacciRecursive(number - 2);
}
There is also a non-recursive algorithm. It looks like this:
UInt64 CalcFibonacciIterative(UInt64 number)
{
    if (number == 0) return 0;

    UInt64 value = 1; // will hold F(number)
    UInt64 F2 = 0;    // two numbers back
    UInt64 F1 = 1;    // one number back
    for (UInt64 i = 1; i < number; i++)
    {
        // Slide the window forward one position.
        value = F2 + F1;
        F2 = F1;
        F1 = value;
    }
    return value;
}
The recursive solution looks a lot more elegant, but how does it perform? Let's try calculating F(40). (I tried higher numbers, but even F(50) takes minutes to calculate.)
Algorithm | Elapsed Time (ms)
--------- | -----------------
Recursive | 5523
Iterative | <1
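(Timings like these can be reproduced with something along the following lines; this is a sketch using .NET's Stopwatch, not necessarily how the numbers above were measured.)

using System;
using System.Diagnostics;

Stopwatch timer = Stopwatch.StartNew();
UInt64 result = CalcFibonacciRecursive(40);
timer.Stop();
Console.WriteLine("Recursive: F(40) = " + result + " took " + timer.ElapsedMilliseconds + " ms");

timer.Restart();
result = CalcFibonacciIterative(40);
timer.Stop();
Console.WriteLine("Iterative: F(40) = " + result + " took " + timer.ElapsedMilliseconds + " ms");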
The race isn't even close. Why the big difference? Let's examine what is going on here. To calculate the 5th Fibonacci number using the recursive algorithm, we have to calculate F(4) and F(3). To calculate F(4), we have to calculate F(3) and F(2). But wait: we already calculated F(3) directly. Unfortunately, we end up doing it again. Here is the complete tree of calls for calculating F(5). Notice all of the redundant work.
If you look closely, you'll see that this call tree is basically a binary tree. Each time we increment the number by 1, we roughly double the amount of work we have to do, for about 2^n units of work. Thus, the recursive algorithm is O(2^n) (the true growth rate is a bit lower, roughly 1.6^n, but it is exponential either way) whereas the iterative algorithm is O(n). Here is the graph:
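One aside before the summary: the recursive version can keep its elegant shape and still run quickly if we cache each result the first time we compute it, so the redundant subtrees collapse. (A commenter below makes the same suggestion; this sketch trades memory for speed.)

using System.Collections.Generic;

static Dictionary<UInt64, UInt64> cache = new Dictionary<UInt64, UInt64>();

UInt64 CalcFibonacciMemoized(UInt64 number)
{
    if (number <= 1) return number;
    UInt64 known;
    if (cache.TryGetValue(number, out known)) return known;

    // Each F(n) is now computed exactly once, making this O(n).
    UInt64 result = CalcFibonacciMemoized(number - 1) + CalcFibonacciMemoized(number - 2);
    cache[number] = result;
    return result;
}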
Summary
As you can see, the choice of algorithm can have a drastic effect on the running time of your program. This is why it is wise to have some familiarity with algorithms when you are programming. Determining the Big-O value for an algorithm can be complex, but luckily, any algorithms book will tell you the classification of an algorithm, so you can look it up instead of having to dust off your math skills. Spend some time studying an algorithms book, or just do some web searches when you go to use one. Pay attention to these values. They will be important someday.
I tried to make this easy to understand but if something isn't clear, please ask. I'll be happy to explain.
*Big-O notation is actually a misnomer here. What computer scientists casually call Big-O notation is more properly called Big-Theta notation. Big-O describes an upper bound, but that bound doesn't have to be close; we could accurately describe every algorithm in this article as O(2^n), but that wouldn't help us much. Rather, we'd like to know the lowest Big-O value that still describes the function. That is Big-Theta.
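(For the curious, the standard textbook definitions, paraphrased: f(n) is O(g(n)) if there are constants c and n0 such that f(n) <= c * g(n) for all n >= n0, meaning g is an upper bound on f up to a constant factor. f(n) is Theta(g(n)) when g bounds f both above and below in that sense, which is what pins down the actual growth rate.)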
Comments

As a self-taught programmer, I recommend the book Introduction to Algorithms to catch up...
While your point about needing to learn algorithms is entirely correct, there's one thing missing in your example: function memoization would allow elegance and speed at the cost of memory in the Fibonacci example. Trade-offs like this are at the heart of choosing between algorithms.
You also don't discuss the need to explore new algorithms. Perhaps that's fodder for a different post altogether?
O(1) Fibonacci algorithm:

#include <math.h>

int closest_int(double d) { return (int)(d + 0.5); }

/* Binet's formula: F(n) = round(phi^n / sqrt(5)), where phi is the
   golden ratio. Note: int overflows for n > 46. */
int fibonacci(int n) {
    const double phi = (1.0 + sqrt(5.0)) / 2.0;
    return closest_int(pow(phi, n) / sqrt(5.0));
}