Richard Plews
A. Aruliah

The future of computing





Modern culture is producing a world of consumers desperate for faster computers to run applications requiring immense amounts of mathematical calculation in as short a time as possible. Today’s fastest computers are used in tasks like rendering animated graphics, molecular modelling and physical simulations such as fluid dynamics calculations.

Since the birth of the computer, these complex processes have been completed through the coordinated effort of a vast number of simple true/false decisions, determined by the logical states of charged and discharged transistors, with a solitary transistor holding one ‘bit’ of information.
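The point that each transistor holds one ‘bit’ can be made concrete with a few lines of Python: a row of eight charged/discharged states (here True/False) together encodes one byte, read as a binary number. The particular bit pattern below is just an illustration.

```python
# Eight one-bit transistor states, most significant bit first.
# This pattern is 0b01000001, i.e. 65, the ASCII code for 'A'.
bits = [False, True, False, False, False, False, False, True]

value = 0
for b in bits:
    value = value * 2 + int(b)   # shift left, then add the next bit

print(value, chr(value))   # 65 A
```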

The speed of a computer is limited mainly by the number of transistors on the Central Processing Unit, often referred to as the ‘CPU chip’. The more transistors that can fit on a chip, the more processing power a computer has; and the closer these transistors are, the faster signals can be transferred between them. Improvements in manufacturing processes have led to the number of transistors on a chip doubling every 18-24 months since 1965, increasing the count from 2,300 to over 100 million.
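The doubling trend described above can be sanity-checked with a short calculation; the figures below are simply the text’s own numbers (2,300 transistors growing past 100 million), not a claim about any specific chip.

```python
# How many doublings does it take to grow from 2,300 transistors
# past 100 million, given a doubling every 18-24 months?

def doublings_to_reach(start, target):
    """Count the doublings needed for `start` to reach `target`."""
    count = 0
    transistors = start
    while transistors < target:
        transistors *= 2
        count += 1
    return count, transistors

n, final = doublings_to_reach(2_300, 100_000_000)
print(n, final)   # 16 doublings: 2,300 * 2**16 = 150,732,800
```

At 24 months per doubling, 16 doublings span roughly 32 years, which matches the time scale the text describes.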

The current process of chip manufacture is called ‘lithography’, in which circuits are etched onto silicon with light. The wavelength of the light used has been decreasing as technology improves, allowing increasingly intricate circuits. This is an expensive method: many of the chips have minuscule flaws, and ordinarily even a few defects can ruin an entire chip, with the only resolution being the costly combination of precision equipment and sterile environments.

The high density of components is also leading to problems with overheating, as the chips use more power generating heat in themselves than doing useful work. However, the final barrier for this production method comes when transistor sizes approach the scale at which quantum tunnelling occurs (~5 nm), allowing electrons to jump between two separated copper conductors and short the chip.

This limit is fast being approached, and if computers are to continue to increase in power, other methods of processor production need to be found. The easiest way to overcome the limit is simply to use more processors: a method long employed by supercomputers that has recently become available in the consumer market. The problem with multi-core processing is its fearsomely high cost.
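The ‘use more processors’ idea can be sketched with Python’s standard multiprocessing module, splitting an embarrassingly parallel workload across CPU cores; heavy_calculation below is a hypothetical stand-in for any expensive numerical task such as one simulation step.

```python
from multiprocessing import Pool

def heavy_calculation(n):
    """Stand-in for an expensive numerical task (e.g. one simulation step)."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [100_000] * 8            # eight independent work items
    with Pool(processes=4) as pool:   # four worker processes, one per core
        results = pool.map(heavy_calculation, inputs)
    print(len(results))               # 8 results, computed across 4 cores
```

Because each work item is independent, four cores can in principle finish the batch in roughly a quarter of the single-core time, minus the overhead of starting and coordinating the worker processes.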

Hewlett-Packard has developed a new method of producing transistors dubbed ‘crossbar architecture’, in which a set of parallel nanoscale wires is laid atop another set of parallel wires at approximately a 90-degree angle, sandwiching a layer of electrically switchable material in between. Where the material is trapped between crossing wires, it can form a switch.
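A toy model (not HP’s actual design) makes the appeal of the geometry clear: R row wires crossing C column wires need only R + C external connections, yet they address R × C junction switches.

```python
# Illustrative model of a crossbar: one switch at every junction of a
# row wire and a column wire, addressed by its (row, column) pair.

class Crossbar:
    def __init__(self, rows, cols):
        # All junction switches start in the 'off' state.
        self.state = [[False] * cols for _ in range(rows)]

    def toggle(self, row, col):
        """Flip the switchable material at one wire junction."""
        self.state[row][col] = not self.state[row][col]

    def read(self, row, col):
        return self.state[row][col]

xbar = Crossbar(4, 4)     # 8 wires in total, but 16 addressable junctions
xbar.toggle(2, 3)
print(xbar.read(2, 3))    # True
```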

Some consider the most significant advancement for computers to be quantum computing, where a single atom denotes one bit of information. The remarkable feature of this is the possibility of a third electronic state: in addition to the two distinct electronic states, the atom (in this case called a ‘qubit’) can also be in a coherent superposition of the two states, according to quantum mechanics. Research in this field is young but fast developing, with predictions of exponential data storage capacity: three qubits can store 8 different numbers at once, four qubits can store 16 different numbers at once, and so on; in general, L qubits can store 2^L numbers at once.
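The 2^L claim can be made concrete with a short sketch: the state of an L-qubit register is a vector of 2^L complex amplitudes, one per classical bit pattern, and a superposition assigns weight to all of them at once. Here we build the uniform superposition of a 3-qubit register, which covers all 8 numbers 0 to 7 simultaneously.

```python
import math

L = 3
dim = 2 ** L                      # 8 basis states |000>, |001>, ..., |111>
amplitude = 1 / math.sqrt(dim)    # equal weight on every basis state
state = [amplitude] * dim         # one amplitude per number 0..7

# The squared amplitudes are probabilities and must sum to 1 (Born rule).
total = sum(abs(a) ** 2 for a in state)
print(dim, round(total, 10))      # 8 1.0
```

Doubling L to 6 would square the number of amplitudes to 64, which is the exponential growth the text describes: each extra qubit doubles the size of the state being held at once.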