Is the CPU dead? The fuss about GPU computing, and why it's a big deal

FEATURE



by Darien Graham-Smith on Jun 3, 2010


Tags: cpu | graphics | intel | amd | cuda | westmere | pc | nvidia




Could the CPU become a bit-part player in the future of computing? Nvidia's bold new vision for graphics could change PCs forever

As Intel rolls out its 32nm processors codenamed Westmere - employing the most compact and efficient processor cores ever created - you might think we've reached the pinnacle of computing technology. Certainly, Intel presents its new CPUs as the very heart of next-generation PC systems.


However, not everyone agrees. While Intel is touting Westmere, Nvidia has launched new graphics hardware, codenamed Fermi, promising not only to deliver next-generation visuals but also to rival - and even shame - the computational power of the conventional CPU.


On the face of it, that sounds absurdly ambitious. But the company has persuasive technical
arguments on its side, demonstrating
how the power of a GPU can now be used for much more
than gaming. Could 2010 be the year in which the CPU is overshadowed by graphics hardware?


In Nvidia's vision, visual and mathematical operations will be handled by a
massively parallel, highly programmable GPU

Stand back CPU


The immensely powerful CPUs typically found in modern PCs simply aren't necessary for the majority of office and internet applications. Yes, if you want to crunch a huge database of numbers, or edit high-definition video, a fast CPU will help. But the rise of the netbook demonstrates that for many everyday tasks a processor as cheap and simple as the Intel Atom is perfectly adequate.


All the same, Atom-based netbooks typically fail to satisfy when it comes to visuals. They lack the processing power to decode and display high-resolution media files, and modern games are out of the question.


Nvidia believes you can get the best of both worlds by partnering a lightweight Atom with a discrete GPU that specialises in these specific tasks - in March 2009, it launched just such a hybrid platform under the name Ion.


"Ion translates to smooth HD video [on an Atom system], including streaming video from YouTube or Hulu, with Flash 10.1, and support for popular games like The Sims and World of Warcraft," claims marketing manager Ben Berraondo. And here at PC Authority we've been impressed by the graphical capabilities of low-power Ion-based systems, including the recommended Samsung N510 netbook and the Asus Eee Box EB1501 nettop.


In the future, it's easy to imagine that in this segment of the market the CPU might become almost irrelevant, with graphics hardware providing a more significant differentiation between models.


The general-purpose GPU


The 480 execution cores in this Nvidia GTX 295 offer huge parallel processing
potential, compared with eight cores in a top of the line Intel Core i7 CPU


This, however, is only one part of the story. If a GPU can help out a CPU by decoding video and rendering 3D scenes, there's no reason why its processing abilities can't be turned to other purposes as well. The concept of using a graphics processor for non-graphical calculations is known as general-purpose GPU computing (GPGPU), or just GPU computing.


And it makes a lot of sense. Intel's top of the line Core i7 processor presents eight execution cores to the operating system (four of which are virtual cores simulated via Hyper-Threading), while a cheap $100 graphics card offers ten times as many processing units - each one drawing a fraction of the power consumed by a CPU core. Move up to the high end and you find cards such as the Nvidia GTX 295 (read our GTX 295 review) integrating a massive 480 cores.


It's clear that by exploiting these devices developers can harness a level of parallel processing horsepower that a CPU can't hope to compete with. "Right now, the industry is seeing previously unheard-of speed-ups by simply moving from the CPU to a GPU," declares Berraondo. "Video encoding, for example, can be ten times faster, or more, on a GPU compared to a CPU."


GPU computing has more serious applications, too. One example is Folding@home, a distributed computing project that seeks cures for medical conditions such as cancer, cystic fibrosis and Parkinson's disease. In 2008, Nvidia reported that, according to independent analysis, running the Folding@home calculations on the company's GPUs yielded results "140 times faster than on some of today's traditional CPUs".

Come in CUDA


Nvidia's secret weapon is its Compute Unified Device Architecture (CUDA), first unveiled in 2007, which extends familiar programming languages - including C, Java and Fortran - with functions that make it easier for developers to offload calculations onto an Nvidia GPU (read how Adobe's new Premiere Pro CS5 takes advantage of CUDA for video editing).
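To give a flavour of what those C extensions look like, here's a minimal sketch of a CUDA program (not from Nvidia's own samples - the array size and launch configuration are arbitrary choices for illustration). The kernel adds two arrays, with each GPU thread handling a single element; the `__global__` qualifier and the `<<<blocks, threads>>>` launch syntax are CUDA's additions to standard C.

```cuda
#include <stdio.h>

// Kernel: runs once per GPU thread. __global__ marks a function
// callable from the host but executed on the device.
__global__ void add(const float *a, const float *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard threads past the end
        out[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float ha[1024], hb[1024], hout[1024];
    for (int i = 0; i < n; i++) { ha[i] = i; hb[i] = 2 * i; }

    // Allocate device memory and copy the inputs across
    float *da, *db, *dout;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dout, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch 4 blocks of 256 threads: one thread per array element
    add<<<4, 256>>>(da, db, dout, n);

    // Copy the result back and inspect one element
    cudaMemcpy(hout, dout, bytes, cudaMemcpyDeviceToHost);
    printf("%f\n", hout[10]);       // 10 + 20 = 30.0

    cudaFree(da); cudaFree(db); cudaFree(dout);
    return 0;
}
```

The appeal Russell describes below is visible here: apart from the launch syntax and memory transfers, this is ordinary C, compiled with Nvidia's nvcc rather than a separate shader toolchain.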


CUDA isn't the only way to create GPU-based software: Microsoft's DirectCompute API, for example, offers programming functions that can be run on any modern GPU, as does the OpenCL framework originally developed by Apple. Indeed, Nvidia's rival AMD argues that it's these open standards, rather than the closed world of CUDA, that represent the future.


But CUDA had a head start of almost two years over these more open interfaces. And as CUDA general manager Sanford Russell explained at the company's 2009 GPU Technology Conference, CUDA is inherently a more appealing choice for developers, owing to its support for familiar languages, and C in particular.


"You go out there," Russell suggested, gesturing through the window to the streets of Silicon Valley, "and you say ‘everybody who programs in C, stand on this side of the street. And everybody who uses an API, go stand over there.' There'd be a few people on the API side and a whole bunch on the C side."


Nvidia plans to make CUDA even more powerful with a new range of graphics cards based on its innovative Fermi architecture (read our review of the first Fermi cards). Fermi cards will be the first to - in the words of CEO Jen-Hsun Huang - "treat both graphics and code as equal citizens". Technical improvements include a shared L2 cache and an onboard thread scheduler that will help Fermi run code more efficiently than any existing GPU.


It all adds up to a vision of the future in which the CPU progressively becomes a lightweight, commoditised part of the overall PC architecture, while both visual and mathematical operations are handled by a massively parallel, highly programmable GPU. Is this where we're heading? Are CPU manufacturers about to become bit players in the PC industry? Read more about the significance of CUDA for high-speed computing.


Limitations of the GPU


The most important slot? Intel is still confident that CPU performance is very competitive with graphics cards for mainstream transcoding


GPU technology still can't compete with a traditional CPU in certain areas. Modern CPUs have huge instruction sets and advanced features, such as out-of-order execution and speculative execution of branches, to ensure clock cycles aren't wasted. The CPU is consequently far better at executing the complex single-threaded code that represents the bulk of applications.


And while large-scale data processing may be just what's needed for research projects and enterprise applications, when it comes to home and business usage there aren't many tasks that really benefit, outside of the familiar examples of video editing and transcoding.


This crucial point isn't lost on Intel. When PC Authority asked the company's product marketing engineer Mike Abel whether the company considered CUDA a threat, he visibly failed to shake in his boots. "In some cases, you might say you're seeing CUDA delivering benefits over the CPU," he acknowledged, "but when I look at DirectCompute, and things like that, I think ‘this is something that's intended for high-end workstations'.


"High-performance computing is a very different segment to mainstream computing, and for the mainstream there are many different ways for a software application to deliver performance, such as multithreading."


Abel couldn't talk in depth about GPU computing for high-performance applications, but this was perhaps to be expected: Intel is currently licking its wounds over the failure of its GPU-style card, Larrabee, the development of which was halted in December. But that failure itself is highly suggestive of the limitations of GPU computing: Larrabee's cores were more advanced than Nvidia's stream processors, enabling them to perform more complex tasks - which also made them more expensive, more power-hungry, and harder to program.


For mainstream computing, though, Abel was happy to make clear that, so far as Intel's concerned, the power of its CPUs is sufficient to make GPU computing irrelevant. "The CPU is at the forefront of the PC, and we believe it will remain there," he affirmed. "If someone wants to play games, they'll get a discrete graphics card.

"But for transcoding, should I go buy a $200 card, or is my CPU good enough? From what we've seen, and what we've tested, the CPU is very competitive, delivering performance in some cases better than what Nvidia and AMD are able to deliver."


There may be some value in DirectCompute, he conceded. "There may be special usages - niche areas where it might make sense. I'm not going to say ‘hey, that's something that Intel will never support'.

"We think that today we have a very competitive solution, but we always try to best utilise all the resources on a processor. And if there may be creative ways to do that, we'll certainly evaluate them."


It's a confident stance but, as Endpoint Technologies analyst Roger Kay notes, Intel can hardly say anything else. "Intel isn't really in a position to produce a massively parallel processor," Kay told PC Authority. "It needs more time to produce a chip with both the performance and power characteristics that will allow it to compete in the highly parallel computing space."

The next round


It will be fascinating to see how Intel and Nvidia's different strategies play out during 2010. But before we reach the endgame, both parties have a few more moves to play. Nvidia has already shown that CUDA is serious business on existing hardware, and Fermi is set to open up further GPU-based programming possibilities.

Intel, meanwhile, hopes to outflank GPU computing by continuing to beef up its processors, helping them handle the media tasks that are the bread and butter of discrete GPUs. "It was already announced that Sandy Bridge [Intel's next-generation architecture, due for launch in 2011] will have Advanced Vector Extensions, which could greatly improve floating point performance," confirmed Intel PR manager Radoslaw Walczyk.


"And we have some other stuff too, which we won't talk about yet. But you can be sure that with new generations of hardware, Intel will introduce new technologies that will definitely benefit multimedia operations."


Ultimately, though, whatever ends up on the desktop, there's no doubt that GPU computing has changed the game forever. As Roger Kay concludes, "the man in the street won't use GPU computing any time soon - but he'll immediately enjoy the products of it: 3D animation effects done by major studios, drugs discovered using GPU computing and, since oil and gas companies will use it for exploration, perhaps even the price he pays for energy."








Copyright © 2009 Dennis Publishing


This article appeared in the May 2010 issue of PC Authority.