According to this article:
Chuck Moore, an AMD senior fellow, made the case for the shift to a new software model based on heterogeneous collections of cores optimized for various tasks. He suggested computers should be more like cellphones, using a variety of specialty cores to run modular software scheduled by a high-level applications programming interface.
I second that entirely. When silicon is cheap, you tend to specialize. Today, we specialize entire machines. This is why any cell phone today packs the power of a good top-of-the-line personal computer from the early to mid 1990s. Specialized designs can be much faster and more economical than general-purpose ones. But I’m not sure that we’ll ever see thousands and thousands of pre-wired cores. The reason is that we are beginning to understand how to build customized floating-point circuits on a per-application basis (if the name in these papers rings a bell, it’s because the guy writing them is my brother…).
But even if you have a plethora of cores, the problem still remains of how to develop for these cores:
The only emerging consensus seems to be that multicore computing is facing a major crisis. In a recent EE Times article titled ‘Multicore puts screws to parallel-programming models’, AMD’s Chuck Moore is reported to have said that ‘the industry is in a little bit of a panic about how to program multicore processors, especially heterogeneous ones.’
That’s where something like XL, concept programming or Intentional Programming may turn out to be indispensable. Why? Because these new techniques allow programmers to describe custom concepts, not just the obsolete CPU model of the 1970s that we still find baked into C and C++ (“memory accesses are cheap”, “every CPU knows about the C operators and only them”, and so on).
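To make the contrast concrete, here is a small sketch in XL-like notation (the `Differentiation` module and the `d`/`dT` forms are illustrative assumptions, not actual library identifiers). The point is that a programmer-defined concept such as symbolic differentiation becomes something the compiler understands and rewrites, instead of being restricted to the fixed set of operators C inherited from 1970s hardware:

```
// Hypothetical XL-style sketch: the 'differentiation' concept is
// supplied by a compiler extension, not hard-wired like C's '+' or '*'.
import Differentiation

function Speed(T : real) return real is
    // The extension can rewrite this symbolically at compile time,
    // e.g. into 2*T + 3, rather than evaluating a numeric approximation.
    return d(T^2 + 3*T) / dT
```

The same mechanism could, in principle, let a program describe the custom floating-point formats or heterogeneous-core scheduling discussed above, so that the compiler targets them directly instead of forcing everything through one generic CPU model.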