I'm surprised that everyone is so focused on the hardware aspect of the problem. Given time (say, within 50 years at most), there will be more than enough computational power to simulate the brain. However, to put it bluntly, hardware has never really been the problem. Lazy computer scientists simply use it as a crutch to justify their lack of feasible AI code.
For example, if there were a solid idea of how to simulate a human brain, it could be written now and run on current machines. It might make even the best supercomputers plod along, but it would still run. Nor do the routines we already have depend heavily on new algorithmic techniques.
Consider the field of computer vision, which, at its most fundamental level, faces the same problems as true AI research. Facial recognition software (see Eigenface) uses 170-year-old mathematical techniques (eigenvalues) to solve the problem of recognizing the distinct features of faces. Moreover, calculating this data is essentially the whole algorithm! No "new computer science paradigms" were required.
Before this, I recall critics in the field claiming that facial recognition software would not be feasible until computers sped up considerably, given the predicted amount of data needed to uniquely identify a face. Granted, this software is still in its infancy and not without its problems, but even in this early state it can analyze hundreds of individuals per second.
To conclude, I would just like to state that the hardware will eventually have the capacity to simulate the brain. However, I doubt we will see a truly simulated brain for a while, unless some of these AI researchers get their acts together and stop depending on hardware to fix their problems.