David Patterson over at IEEE Spectrum has written an article entitled “The Trouble With Multicore.” Kudos to David for a very thorough and well-thought-out article.  He gives plenty of background on how we arrived at multicore processors, along with some of the techniques and challenges that come with parallel processing.  The advent of multicore processing was pretty much a gamble on the part of the semiconductor manufacturers, although their hand was forced by the power wall associated with increasing processor clock speeds:

“[In 2005] the semiconductor industry threw the equivalent of a Hail Mary pass when it switched from making microprocessors run faster to putting more of them on a chip—doing so without any clear notion of how such devices would in general be programmed. The hope is that someone will be able to figure out how to do that, but at the moment, the ball is still in the air.”

Achieving sustained parallel performance with application codes is a major effort.  In the research and engineering communities we have had increasing success, but with it comes a major outlay of time and resources.  Here are a couple of choice quotes from the article that illustrate the effort it takes to exploit multicore processors (a small code sketch of what that parallel work looks like follows the quotes):

“In general, parallelism can work when you can afford to assemble a crack team of Ph.D.-level programmers to tackle a problem with many different tasks that depend very little on one another.”

“The La-Z-Boy era of program performance is now officially over, so programmers who care about performance must get up off their recliners and start making their programs parallel.”
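
To make that second quote concrete: in the easy case, where the iterations of a loop do not depend on one another, “making a program parallel” can be as simple as adding a directive.  The sketch below uses OpenMP in C purely as an illustration (my choice of tool, not anything prescribed in the article), and it assumes a compiler with OpenMP support, e.g. gcc -fopenmp:

    /* Minimal sketch of loop-level parallelism with OpenMP in C.
     * Illustrative only; not taken from Patterson's article.
     * Assumed build: gcc -O2 -fopenmp axpy.c -o axpy */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 10000000;
        double *x = malloc(n * sizeof *x);
        double *y = malloc(n * sizeof *y);
        if (!x || !y) return 1;

        for (int i = 0; i < n; i++) {
            x[i] = 1.0;
            y[i] = 2.0;
        }

        /* The iterations are independent, so a single pragma lets the
         * runtime split the loop across all available cores. */
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            y[i] = 2.5 * x[i] + y[i];   /* classic AXPY-style update */
        }

        printf("y[0] = %.1f (ran with up to %d threads)\n",
               y[0], omp_get_max_threads());
        free(x);
        free(y);
        return 0;
    }

Real application codes rarely decompose this cleanly, which is exactly why the “crack team of Ph.D.-level programmers” quip rings true: dependencies, load imbalance, and shared data turn that one-line pragma into months of restructuring work.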

“The odds are still against the microprocessor industry squarely completing its risky Hail Mary pass and finding some all-encompassing way to convert every piece of software to run on many parallel processors.”  Toward the end of the article, we are left with three scenarios for how we might fare over the next ten years:

Scenario 1: The first is that we drop the ball.  That is, the practical number of cores per chip hits a ceiling, and the performance of microprocessors stops increasing.

Scenario 2: Another possibility is that a select few of us will be able to catch today’s risky Hail Mary pass. Perhaps only multimedia apps such as video games can exploit data-level parallelism and take advantage of the increasing number of cores.

Scenario 3: The most optimistic outcome, of course, is that someone figures out how to make dependable parallel software that works efficiently as the number of cores increases. That will provide the much-needed foundation for building the microprocessor hardware of the next 30 years.
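
Scenario 2’s mention of data-level parallelism deserves a quick illustration.  In multimedia-style code the same operation is applied to every pixel or sample in a large buffer, which maps naturally onto many cores and vector units.  Here is a rough, hypothetical sketch in C, again assuming OpenMP rather than anything specified in Patterson’s article:

    /* Rough sketch of data-level parallelism: apply the same operation
     * to every pixel of an image buffer. Illustrative only; assumes a
     * compiler with OpenMP support. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int main(void)
    {
        const int width = 1920, height = 1080;
        const int n = width * height;
        uint8_t *pixels = malloc(n);
        if (!pixels) return 1;

        for (int i = 0; i < n; i++)
            pixels[i] = (uint8_t)(i % 256);   /* fake grayscale image */

        /* The same operation on every pixel, with no pixel depending
         * on another, so the work spreads across cores and SIMD lanes. */
        #pragma omp parallel for simd
        for (int i = 0; i < n; i++) {
            int v = pixels[i] + 40;           /* brighten */
            pixels[i] = (uint8_t)(v > 255 ? 255 : v);
        }

        printf("first pixel after brightening: %d\n", pixels[0]);
        free(pixels);
        return 0;
    }

Because no pixel depends on any other, this kind of code scales almost effortlessly with core count, which is precisely why games and media apps are the poster children for that scenario.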

Ah, but I think there is a fourth, more plausible scenario!

Scenario 4: The semiconductor industry will find some new process or material for building faster processors.  The era of multicore will not be over, but we will see core counts level off as individual cores increase in speed.  By the time a new manufacturing process is in place, many of the applications in the enterprise, research, and engineering communities will already have been parallelized and will continue to require multiple cores.

My additional Scenario 4 is more of a hybrid of Scenarios 1 and 2.  If Scenario 1 comes to pass, the pressure and demand for faster processors with fewer cores will sustain continued research into new manufacturing methods.  In the meantime, some of us will be successful in porting our codes!  So I think Scenario 4 is a pretty plausible outcome, given the market pressures and the intervening time we have to do parallel software research.  I guess time will tell.

How do you think we will fare in the next ten years?  Which scenario do you think is the most likely?

And as always, check out my other HPC blogs over at HPCatDell.com.
