I've read that PC (Intel or AMD) clock rates have not risen at the historical rate since about 2005, so I'm wondering what single-threaded throughput gains there have been over the past 7 years, and how that compares to the past 10 or 15 years. Specifically, I'd be interested in seeing typical application throughput (say, Windows running a single-threaded benchmark) graphed over time, to see whether there's a kink around 2005.
I would guess there would be gains even in the past 7 years due to:
1. faster memory, e.g. DDR3 being faster than DDR2
2. faster bus rates on the motherboard
3. additions to the x86-64 instruction set
4. slight gains in clock rate, albeit not as great as in the 1990s
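To make concrete what I mean by "a single-threaded benchmark", here is a rough sketch (my own toy example, not any standard benchmark suite) of the kind of measurement I have in mind: a tight scalar loop timed on one core, where the dependent arithmetic chain prevents the work from being spread across cores.

```python
import time

def scalar_work(n):
    # Dependent-chain integer arithmetic: each iteration needs the
    # previous result, so this genuinely measures one core's speed.
    acc = 0
    for i in range(n):
        acc = (acc * 3 + i) & 0xFFFFFFFF
    return acc

n = 10_000_000
start = time.perf_counter()
result = scalar_work(n)
elapsed = time.perf_counter() - start
print(f"{n} iterations in {elapsed:.3f}s "
      f"({n / elapsed / 1e6:.1f} M iters/s), result={result}")
```

Running something like this on machines from different years would, I imagine, show roughly how much of the improvement came from single-threaded throughput alone.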
I'm not sure what else. Of course, multi-threaded applications and multi-threaded operating system operations that take advantage of multiple cores are potentially a big variable, but in my mind they don't really deserve to be classed in the same category as any of the above, because they are software improvements more than anything else. Hmmm... the audacity to say adding 3 cores to 1 is a software improvement. A little ironic, but really, adding cores is sort of like moving the goalposts: it still doesn't work without software changes (which might not be feasible for some things).
I'd appreciate any perspective on this, or pointers to published examinations of the question. I did read one essay from a Microsoft compiler expert on this subject, but that's fairly dated now. At the time that article came out, the main point was that throughput gains would be harder to come by if you didn't rewrite for multiple threads. I wonder whether things have been as bad as people used to expect.