Assignment 2


Supercomputers can be defined as the most advanced and powerful computers, or arrays of computers, in existence at the time of their construction. Supercomputers are used to solve problems that are too complex or too massive for standard computers, such as calculating how individual molecules move in a tornado or forecasting detailed weather patterns. Some supercomputers are single computers consisting of multiple processors; others are clusters of computers that work together.

Supercomputers were first developed in the mid-1970s, when Seymour Cray introduced the "Cray 1" supercomputer. Because microprocessors were not yet available, its processor was built from individual integrated circuits. Successive generations of supercomputers were developed by Cray and became more powerful with each version. Other companies such as IBM, NEC, Texas Instruments, and Unisys began to design and manufacture more powerful and faster computers after the introduction of the Cray 1.

Today's fastest supercomputers include IBM's Blue Gene and ASCI Purple, SCC's Beowulf, and Cray's SV2. These supercomputers are usually designed to carry out specific tasks. For example, IBM's ASCI Purple is a $250 million supercomputer built for the Department of Energy (DOE). This computer, with a peak speed of 467 teraflops, is used to simulate the aging and operation of nuclear weapons. Future supercomputer designs might incorporate entirely new circuit-miniaturization technologies, including new storage devices and data transfer systems. Scientists at UCLA are currently working on processor and circuit designs built around molecules that behave like transistors. By incorporating this technology, new designs might include processors 10,000 times smaller, yet much more powerful, than any current models.

Supercomputer computational power is rated in FLOPS (floating-point operations per second). The first commercially available supercomputers reached speeds of 10 to 100 million FLOPS. The next generation of supercomputers, some of which are presently in the early stages of development, is predicted to break the petaflop level. This would represent computing power more than 1,000 times faster than a teraflop machine. To put these processing speeds in perspective, a relatively old supercomputer such as the Cray C90 (introduced in the early 1990s) has a processing speed of only 8 gigaflops. It can solve a problem that takes a personal computer a few hours in just 0.002 seconds!
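
A back-of-the-envelope sketch in Python shows how these ratings translate into wall-clock time. The workload size below is a hypothetical figure chosen only to make the ratios visible, not a benchmark of any of the machines named above:

```python
# Rough FLOPS arithmetic: how long a fixed floating-point workload takes
# at different machine ratings. The 16-trillion-operation workload is an
# assumed figure for illustration only.

WORKLOAD_FLOP = 16e12  # total floating-point operations (assumed)

ratings = {
    "desktop PC (~1 gigaflop, assumed)": 1e9,
    "Cray C90 (8 gigaflops)": 8e9,
    "teraflop machine": 1e12,
    "petaflop machine": 1e15,
}

for name, flops in ratings.items():
    seconds = WORKLOAD_FLOP / flops
    print(f"{name:34s} -> {seconds:12.6f} s")

# A petaflop machine is 1,000 times faster than a teraflop machine:
print(1e15 / 1e12)  # 1000.0
```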

Supercomputer design varies from model to model. Generally, there are vector computers and parallel computers. Vector computers use a very fast data "pipeline" to move data from components and memory in the computer to a central processor. Parallel computers use multiple processors, each with its own memory bank, to split up data-intensive tasks.
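
The two designs can be caricatured in a few lines of Python (a loose sketch, not how real vector hardware is programmed): NumPy's vectorized operations stream one instruction over a whole array much like a vector pipeline, while the multiprocessing module hands independent chunks to separate worker processes, each with its own memory:

```python
import numpy as np
from multiprocessing import Pool

def work(chunk):
    # Each worker operates on its own private copy of one chunk.
    return np.sqrt(chunk) * 2.0

if __name__ == "__main__":
    data = np.arange(4_000_000, dtype=np.float64)

    # "Vector" style: one fast unit streams a single operation over the
    # whole array -- one instruction, a pipeline of data.
    vector_result = np.sqrt(data) * 2.0

    # "Parallel" style: split the array into chunks and hand each to a
    # separate process with its own memory.
    with Pool(processes=4) as pool:
        parts = pool.map(work, np.array_split(data, 4))
    parallel_result = np.concatenate(parts)

    assert np.allclose(vector_result, parallel_result)
```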

A good analogy for contrasting vector and parallel computers is this: a vector computer is like a single person solving a series of 20 math problems in consecutive order, while a parallel computer is like 20 people, each solving one math problem in the series. Even if the single person (vector) were a master mathematician, the 20 people would finish the series much more quickly. Other major differences between vector and parallel processors include how data is handled and how each machine allocates memory. A vector machine is usually a single super-fast processor with the entire computer's memory allocated to its operation. A parallel machine has multiple processors, each with its own memory. Vector machines are easier to program, while parallel machines, with data spread across multiple processors (in some cases more than 10,000 of them), can be tricky to orchestrate. To continue the analogy, 20 people working together (parallel) could have trouble communicating data among themselves, whereas a single person (vector) would entirely avoid these communication complexities.
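
The analogy maps directly onto code. A minimal sketch, assuming 20 independent "problems" that each take about a second: the sequential loop is the lone mathematician, the process pool is the 20 people, and the cost of starting workers and shipping inputs and results between processes is the communication overhead the analogy warns about:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def solve(problem_id: int) -> int:
    """One 'math problem': a stand-in task that takes ~1 second."""
    time.sleep(1)
    return problem_id * problem_id

if __name__ == "__main__":
    problems = range(20)

    # One person (vector-style): ~20 seconds, no coordination needed.
    start = time.perf_counter()
    sequential = [solve(p) for p in problems]
    print(f"sequential: {time.perf_counter() - start:.1f}s")

    # Twenty people (parallel): ~1 second of actual work, plus the
    # overhead of spawning workers and moving data between processes.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=20) as pool:
        parallel = list(pool.map(solve, problems))
    print(f"parallel:   {time.perf_counter() - start:.1f}s")

    assert sequential == parallel
```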

Recently, parallel vector computers have been developed to take advantage of both designs.

Supercomputers are called upon to perform the most compute-intensive tasks of modern times. As supercomputers have developed over the last 30 years, so have the tasks they typically perform. Modeling of complex real-world systems such as fluid dynamics, weather patterns, seismic activity prediction, and nuclear explosion dynamics represents the most modern use of supercomputers. Other tasks include human genome sequencing, credit card transaction processing, and the design and testing of modern aircraft.

Although there are numerous companies that manufacture supercomputers, information about purchasing one is not always easy to find on the Internet. The price tag for a custom-built supercomputer can range anywhere from about $500,000 for a Beowulf system up to millions of dollars for the newest and fastest supercomputers.

Scyld Computing Corporation (SCC) provides a Web site (www.scyld.com) with detailed information about its Beowulf Operating System and the computers developed to allow multiple systems to operate under one platform.
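
On a Beowulf-style cluster, this kind of coordination is typically done with message passing. The sketch below uses the mpi4py binding to MPI as a stand-in (an assumption; Scyld's own tooling is not shown here) to split a sum across nodes, launched with something like `mpirun -n 4 python sum.py`:

```python
# A minimal message-passing sketch in the Beowulf spirit, using mpi4py
# (assumed installed; run with e.g. `mpirun -n 4 python sum.py`).
# Each node sums its own slice; rank 0 combines the partial results.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this node's id within the job
size = comm.Get_size()   # number of nodes in the job

N = 1_000_000
# Carve the range [0, N) into `size` contiguous slices, one per node.
start = rank * N // size
stop = (rank + 1) * N // size
partial = sum(range(start, stop))

# Combine the partial sums on rank 0 -- the only inter-node traffic.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of 0..{N - 1} = {total}")
```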

IBM has produced, and continues to produce, some of the most cutting-edge supercomputer technology. Its "Blue Gene" supercomputer, being constructed in collaboration with Lawrence Livermore National Labs, is expected to run 15 times faster (at 200 teraflops) than IBM's current supercomputers. IBM is also currently working on what it calls a "self-aware" supercomputer, named "Blue Sky," for the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. Blue Sky will be used to work on colossal computing problems such as weather prediction. Additionally, this supercomputer can self-repair, requiring no human intervention. Intel has developed a line of supercomputers known as Intel TFLOPS, which use thousands of Pentium Pro processors in a parallel configuration to meet the supercomputing demands of their customers.