There's a balancing act in testing between automation and manual testing. Over my time at Microsoft I've seen the pendulum swing back and forth between extensive manual testing and almost complete automation. As I've written before, the best answer lies somewhere in the middle. The question then becomes how to decide what to automate and what to test manually. Before answering that question, a quick diversion into the advantages of each model will be useful.
Manual testing is the most flexible. Test case development is very cheap. While skilled professionals will find more, a baseline of testing can be done with very little skill. Verification of a bug is often instantaneous. In the hands of a professional tester, a product will give up its bugs quickly. It takes very little time to try dragging a window in a particular way or entering values into an input box. This has the additional advantage of making the testing more timely: there is little delay between code being ready to test and the tests being run. The difficulty with manual testing is its cost over time. Each iteration requires human time, and humans are quite costly. The cumulative costs can be very, very large. If the test team can cover version 1.0 within the development schedule but nothing is automated, shipping version 2.0 will take all of the 1.0 testing time plus the time to test the new 2.0 features. Version 3.0 will cost roughly three times as much to test as the first version, and so on. That rate of cost growth is unsustainable.
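The cost curve described above can be made concrete with a toy model. This is my own illustration, not data from any real project; the assumption is simply that each release adds one unit of new-feature testing and, with no automation, every release must re-run all the testing for prior features as well.

```python
# Toy model of cumulative manual-testing cost across releases.
# Assumption (illustrative only): each version adds one unit of
# new-feature testing, and nothing is automated, so every release
# re-tests all prior features by hand.

def manual_cost(version: int) -> int:
    """Manual effort to ship the given version: re-test everything so far."""
    return version  # 1 unit for v1, 2 units for v2 (old + new), etc.

def cumulative_cost(versions: int) -> int:
    """Total manual effort spent across all releases through `versions`."""
    return sum(manual_cost(v) for v in range(1, versions + 1))

for v in range(1, 6):
    print(f"v{v}.0: per-release cost = {manual_cost(v)}, "
          f"cumulative = {cumulative_cost(v)}")
```

The per-release cost grows linearly, but the cumulative bill grows quadratically — by version 5.0 the team has spent fifteen units of effort on testing that cost one unit for the first release.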
On the opposite end of the spectrum is the automated test. Developing automated tests is expensive; it takes a skilled programmer some number of hours per test case, and even verifying a bug can require substantial investment. The up-front costs are high. The difficulty of development also means there is a measurable lag between code being ready for test and the tests being ready to run. The advantage comes in the repeated nature of the testing: with a good test system, running the tests becomes nearly free.
With those advantages and disadvantages in mind, a decision framework becomes obvious. If testing only needs to happen a small number of times, it should be done manually. If it needs to be run regularly--daily or even once per milestone--it should be automated. A rule of thumb might be that if a test needs to run more than twice during a product cycle, it should be automated. Because of the delay in test development, most features should be tested manually once before the test automation for the feature is written. This is for two reasons. First, manual exploratory testing will almost always be more thorough; the cost of test development ensures this. Second, it is more timely. Finding bugs right away, while developers still have the code fresh in their minds, is best. Do thorough exploratory testing of each feature immediately. Afterwards, automate the major cases.
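The "more than twice" rule of thumb falls straight out of a break-even comparison. Here is a sketch; the hour figures are made-up assumptions for illustration, not measurements:

```python
# Break-even sketch: manual testing pays full human cost every run;
# automation pays a large one-time development cost plus near-free reruns.
# All cost figures below are illustrative assumptions.

def manual_total(cost_per_run: float, runs: int) -> float:
    """Total cost of manual testing: full human cost on every run."""
    return cost_per_run * runs

def automated_total(dev_cost: float, cost_per_run: float, runs: int) -> float:
    """Total cost of automation: up-front development plus cheap reruns."""
    return dev_cost + cost_per_run * runs

# Example: a test that takes 1 hour by hand vs. 3 hours to automate.
MANUAL_HOURS = 1.0
AUTOMATION_DEV_HOURS = 3.0
AUTOMATED_RUN_HOURS = 0.01  # machine time, nearly free

for runs in (1, 2, 3, 10, 100):
    m = manual_total(MANUAL_HOURS, runs)
    a = automated_total(AUTOMATION_DEV_HOURS, AUTOMATED_RUN_HOURS, runs)
    print(f"{runs:>3} runs: manual={m:6.2f}h  automated={a:6.2f}h  "
          f"-> {'automate' if a < m else 'manual'}")
```

With these particular numbers, automation pays for itself after a handful of runs. The exact break-even point depends on the real ratio of automation cost to manual cost, but the shape of the two curves is always the same: flat-plus-offset for automation, steadily climbing for manual.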
This means that some tests will be run up front and never again. That is acceptable. If the right automated tests are chosen, they will act as canaries and detect when things go wrong later. It is also inevitable: automating everything is too costly, and the project won't wait for all of that testing to be written. Those who say they automate everything are likely fooling themselves; there are a lot of cases they never write and thus never run.