Tuesday, April 05, 2005

 

Dueling Multicores: Intel and AMD Fight For the Future


It now appears certain that 64 bit multicore CPUs - processors with two or more engines sharing the workload - will be standard equipment in servers, workstations, and even desktop and mobile computers as soon as 2007. The computer you build or purchase two years from today may be an entirely different beast than the one with which you're now familiar. The reason is no surprise: the evolutionary chart for single-core microprocessors has finally run its course, after clashing head-on with the laws of physics. The race has thus begun for manufacturers to produce the industry standard for dual-core, and the payoffs in both performance and profits could be huge.

Both Intel and AMD are in a position to bring forth the next industry standard architecture. But for the first time in its history, Intel finds itself falling behind, with rival AMD standing an even chance - some say better than even - to lay the cornerstone. Last August, AMD successfully demonstrated the first x86-compatible 64 bit dual-core platform, currently slated for general release in 2Q 2005; it extended that demonstration to participants at LinuxWorld last February. In response, Intel is introducing plans for a new Pentium Extreme Edition and a dual-core Pentium D series, whose 64 bit EM64T instruction set is basically compatible with - essentially identical to - the AMD64 set that AMD introduced with the Opteron and Athlon 64.

FROM INTEL’S POINT OF VIEW
Intel is now claiming it is chasing not a rival manufacturer, but rather an economic principle. The guiding theme of Intel's last Spring IDF (Intel Developer Forum) was Moore's Law. Based on observations made by Intel co-founder Gordon Moore and published in 1965, its central principle is that a CPU's transistor count can be economically doubled roughly every two years. In the same way that the publicly propagated portion of relativity theory - E = mc² - tends to be molded and reshaped to support almost any popular principle, Moore's Law has been reinterpreted over its 40-year history to refer to increases in processor performance, to increases in clock speed, and in recent years, even to the productivity of the American white-collar workforce.
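To put a rough number on that doubling rule, here is a small back-of-the-envelope sketch in C++; the 2005 starting count is a made-up figure for illustration, not an Intel roadmap number:

```cpp
#include <cmath>
#include <cstdio>

// Illustrative only: project a transistor budget under Moore's Law
// (doubling roughly every two years). The starting count is hypothetical.
int main() {
    const double start_year  = 2005.0;
    const double start_count = 100e6;   // assume ~100 million transistors in 2005

    for (int offset = 0; offset <= 6; offset += 2) {
        double year  = start_year + offset;
        double count = start_count * std::pow(2.0, offset / 2.0);
        std::printf("%.0f: ~%.0f million transistors\n", year, count / 1e6);
    }
    return 0;
}
```

Run from 2005, the projection reaches roughly 200 million transistors by 2007 and 400 million by 2009 - headroom Intel now proposes to spend on additional cores rather than on higher clock speeds.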

"It's all about Moore's Law," announced Intel Corp.'s Senior Vice President Pat Gelsinger at the company's most recent developers' forum. “But now with the change in Moore's Law, it's all about multicore architectures. ... By utilizing Moore's Law, with this transition to multicore, moving from hyperthreading to dual-core to multicore, we will deliver the fastest rate of performance improvement of our time.”

Why this focus? The reason is a simple one: it's no longer possible for new single-core processors, even with continually shrinking lithographic processes, to be clocked into the 3.5 - 4.0 GHz range without toasting themselves. So the only way for AMD and Intel to keep delivering performance gains at the rate we've enjoyed so far is through parallelism. In short, new CPUs must divide their workloads among two or more processor engines. Intel is introducing four new dual-processing-engine design platforms almost simultaneously this year.
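To make "dividing the workload" concrete, here is a minimal, illustrative C++ sketch (my own example, not Intel or AMD code) that splits a summation across two worker threads, which a dual-core processor can run on separate engines:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Sum one slice of the data; each call is meant to run on its own engine.
static void sum_range(const std::vector<uint32_t>& data,
                      std::size_t begin, std::size_t end, uint64_t& out) {
    out = std::accumulate(data.begin() + begin, data.begin() + end, uint64_t{0});
}

int main() {
    std::vector<uint32_t> data(1 << 20, 1);   // ~one million elements, all ones
    uint64_t lo = 0, hi = 0;
    std::size_t mid = data.size() / 2;

    // Divide the workload between two threads; the OS scheduler decides
    // which processor engine each thread actually runs on.
    std::thread t1(sum_range, std::cref(data), std::size_t{0}, mid, std::ref(lo));
    std::thread t2(sum_range, std::cref(data), mid, data.size(), std::ref(hi));
    t1.join();
    t2.join();

    std::cout << "total = " << (lo + hi) << '\n';   // prints 1048576
    return 0;
}
```

The division of labor happens entirely in software; the operating system decides whether those two threads land on two physical cores, two hyperthreaded logical processors, or take turns on a single engine.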

The new model names in the Pentium D series are Smithfield, Lyndon and Averill, with Yonah for notebooks; the best Smithfield of all will be Presler, built on 65 nm lithography.

For the Itanium 2 series, for Xeon server platform processors, and for Xeon MP four-socket servers, Intel will introduce new models built on dual-core platforms.

Intel has been working to implement and support a "stepping-stone" technology that would smooth the transition path from single-core x86 to dual-core EM64T. In 2002, Intel gambled that this stepping stone would be hyperthreading (HT). The immediate benefit of HT parallelism is that it doesn't require the software - the programs that constitute each thread - to be aware of any parallelism taking place. Each thread, not "knowing" it runs in a split environment, "believes" it has the processor all to itself.
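That transparency is easy to see from ordinary code. The tiny C++ sketch below simply asks how many hardware threads the machine exposes, and the answer looks the same whether those threads come from hyperthreading or from two physical cores:

```cpp
#include <iostream>
#include <thread>

int main() {
    // std::thread::hardware_concurrency() reports the number of hardware
    // threads (logical processors) the OS exposes; it may return 0 if the
    // value is unknown. A hyperthreaded single core and a true dual-core
    // can both report 2 here - the software cannot tell them apart.
    unsigned n = std::thread::hardware_concurrency();
    std::cout << "hardware threads visible to this program: " << n << '\n';
    return 0;
}
```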

But compared to dual-core technology, hyperthreading comes up short, says Insight 64 principal analyst Nathan Brookwood: "I think hyperthreading was inherently somewhat limited in terms of performance benefits," he tells Tom's Hardware Guide. The reason, he states, is that while an HT thread "sees" that it has all the CPU's resources to itself, the CPU hasn't really replicated its resources for both threads. For example, since the L1 cache is so frequently polled by the processor, HT divides the cache in half and apportions each thread its own half. An apparently smaller cache gives a thread a narrower view of memory, forcing it to refresh its contents far more often. "Therefore, dual-core comes a lot closer to providing 100% performance benefits," concludes Brookwood, "whereas hyperthreading typically [provides] 15-20% performance benefits."
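Brookwood's figures are easy to check on your own hardware. The sketch below (my own illustration of the measurement, not code from the article's sources) times a compute-bound loop on one thread and then on two; on a true dual-core the speedup should approach 2x, while on a hyperthreaded single core it will usually be far smaller - though the exact number depends heavily on the workload:

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

// Compute-bound busy work: iterate a simple integer recurrence.
static void spin(uint64_t iters, volatile uint64_t* sink) {
    uint64_t x = 1;
    for (uint64_t i = 0; i < iters; ++i)
        x = x * 6364136223846793005ULL + 1;
    *sink = x;   // keep the result live so the loop isn't optimized away
}

// Launch `threads` workers, each doing iters_per_thread iterations,
// and return the elapsed wall-clock time in seconds.
static double run(int threads, uint64_t iters_per_thread) {
    std::vector<std::thread> pool;
    std::vector<uint64_t> sinks(threads);
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < threads; ++i)
        pool.emplace_back(spin, iters_per_thread,
                          (volatile uint64_t*)&sinks[i]);
    for (auto& t : pool) t.join();
    std::chrono::duration<double> dt = std::chrono::steady_clock::now() - start;
    return dt.count();
}

int main() {
    const uint64_t work = 400000000ULL;   // total iterations
    double t1 = run(1, work);
    double t2 = run(2, work / 2);         // same total work, split in two
    std::cout << "1 thread:  " << t1 << " s\n"
              << "2 threads: " << t2 << " s\n"
              << "speedup:   " << t1 / t2 << "x\n";
    return 0;
}
```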

For now, Intel is announcing HT technology at the desktop level beginning in 3Q 2005, in single-core Pentium 4 units clocked at up to 3.8 GHz with an 800 MHz front-side bus and 1 MB of L2 cache. Also in 3Q, the Pentium 4 Extreme Edition brand will be introduced, which may mark the first time that clock speed is lowered as part of a performance gain. The Extreme units will be clocked at just under 3.5 GHz (down from about 3.7), and will feature an optional 1066 MHz front-side bus and a standard 2 MB L3 cache.


FROM AMD’S POINT OF VIEW
AMD argues that the road to multicore does not run through hyperthreading. "In our particular world," says commercial software strategist Margaret Lewis, "hyperthreading was not the right approach. Our architecture [AMD64] has been designed since 1999 with dual-core capability. Having two physical cores is going to provide you the potential for much better performance than having one core that's divided into two logical pieces, which is what hyperthreading does."
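The physical-versus-logical distinction Lewis draws can be inspected directly on a Linux system by comparing the logical processors the kernel reports with the distinct cores behind them. The sketch below is my own illustration, not AMD code, and it assumes /proc/cpuinfo exposes the usual "physical id" and "core id" fields:

```cpp
#include <fstream>
#include <iostream>
#include <set>
#include <string>
#include <utility>

// Linux-only sketch: compare logical processors with distinct physical
// cores by parsing /proc/cpuinfo ("physical id" + "core id" name a core).
int main() {
    std::ifstream cpuinfo("/proc/cpuinfo");
    std::string line;
    int logical = 0, phys = 0, core = 0;
    std::set<std::pair<int, int>> cores;

    while (std::getline(cpuinfo, line)) {
        if (line.compare(0, 9, "processor") == 0) {
            ++logical;
        } else if (line.compare(0, 11, "physical id") == 0) {
            phys = std::stoi(line.substr(line.find(':') + 1));
        } else if (line.compare(0, 7, "core id") == 0) {
            core = std::stoi(line.substr(line.find(':') + 1));
            cores.insert({phys, core});
        }
    }
    std::cout << "logical processors: " << logical << '\n'
              << "physical cores:     " << cores.size() << '\n';
    // If the logical count exceeds the core count, the extra "processors"
    // are hyperthreading siblings rather than separate physical cores.
    return 0;
}
```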

AMD's Pat Patla agrees: "We've been waiting for the 90 nm process so that we can produce a high-volume, economical dual-core chip for the masses. Just as we brought 64 bit computing to the masses when we launched Opteron almost two years ago, we're going to do the same thing to dual-core. We feel that this is the right way to bring the next level of computing performance to the end user; it's not through hyperthreading, because hyperthreading only addresses some of the issues around how to get better multitasking and multithreading."





