Here's a neat chart showing how long common data operations take. I bolded the important ones:
L1 cache reference ............................. 0.5 ns
Branch mispredict .............................. 5 ns
L2 cache reference ............................. 7 ns
Mutex lock/unlock .............................. 100 ns (25)
Main memory reference .......................... 100 ns
Compress 1K bytes with Zippy ................... 10,000 ns (3,000)
Send 2K bytes over 1 Gbps network .............. 20,000 ns
Read 1 MB sequentially from memory ............. 250,000 ns
**Round trip within same datacenter ............ 500,000 ns**
**Disk seek ..................................... 10,000,000 ns**
**Read 1 MB sequentially from network ........... 10,000,000 ns**
**Read 1 MB sequentially from disk .............. 30,000,000 ns (20,000,000)**
**Send packet CA->Netherlands->CA ............... 150,000,000 ns**
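These magnitudes are hard to feel in nanoseconds. As a rough sketch (the table data is from above; the scaling idea is mine), here's what the numbers look like if we stretch time so that one L1 cache reference takes a full second:

```python
# Latencies from the table above, in nanoseconds.
latencies_ns = {
    "L1 cache reference": 0.5,
    "Branch mispredict": 5,
    "L2 cache reference": 7,
    "Mutex lock/unlock": 100,
    "Main memory reference": 100,
    "Compress 1K bytes with Zippy": 10_000,
    "Send 2K bytes over 1 Gbps network": 20_000,
    "Read 1 MB sequentially from memory": 250_000,
    "Round trip within same datacenter": 500_000,
    "Disk seek": 10_000_000,
    "Read 1 MB sequentially from network": 10_000_000,
    "Read 1 MB sequentially from disk": 30_000_000,
    "Send packet CA->Netherlands->CA": 150_000_000,
}

# Rescale so an L1 cache reference (0.5 ns) becomes 1 "human" second.
scale = 1 / 0.5
for name, ns in latencies_ns.items():
    print(f"{name:40s} {ns * scale:>15,.0f} s")
```

On that scale a main-memory reference takes over three minutes, and a disk seek takes about 231 days, which is why the prose below calls us disk- and network-bound.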
The L1/L2 caches live on the CPU itself. As you can see, we're severely disk- and network-bound when it comes to performance, and we'll always be network-bound until someone figures out how to make data travel faster than the speed of light.
Internet apps are the future, but they will always be delayed in some way: the server might be able to fetch the data you want in a few milliseconds, but it still takes about 75 ms to send a piece of information from New York to California over today's wired infrastructure.
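A quick back-of-envelope check shows why that delay is close to a physical floor. This sketch assumes a NY-to-CA great-circle distance of roughly 4,100 km and light propagating in fiber at about 200,000 km/s (roughly two-thirds of c); both figures are approximations, not measurements:

```python
# Lower bound on coast-to-coast latency from the speed of light in fiber.
distance_km = 4_100          # assumed NY -> CA great-circle distance
fiber_speed_km_s = 200_000   # assumed speed of light in fiber (~2/3 c)

one_way_ms = distance_km / fiber_speed_km_s * 1000
round_trip_ms = 2 * one_way_ms
print(f"one-way: {one_way_ms:.1f} ms, round trip: {round_trip_ms:.1f} ms")
# prints: one-way: 20.5 ms, round trip: 41.0 ms
```

Real paths are longer than the great-circle route and add routing and queuing delay on top, which is how the observed figure ends up closer to 75 ms.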