FUNDAMENTALS OF CMOS VLSI-ANS




GROUP A:

1. CMOS

Complementary metal-oxide-semiconductor (CMOS) is a technology for constructing integrated circuits. CMOS technology is used in microprocessors, microcontrollers, static RAM, and other digital logic circuits. CMOS technology is also used for several analog circuits such as image sensors (CMOS sensors), data converters, and highly integrated transceivers for many types of communication. Frank Wanlass patented CMOS in 1967 (US patent 3,356,858).

CMOS is also sometimes referred to as complementary-symmetry metal-oxide-semiconductor (or COS-MOS).

The words "complementary
-
symmetry" refer to the fact that the typical digital design style with CMOS
uses complementary and symmetrical pairs of

p
-
type

and

n
-
type

metal oxide semiconductor field effect
transistors

(MOSFETs) for logic functions.
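
As a concrete illustration of such a complementary pairing, the short Python sketch below models a two-input CMOS NAND gate at the switch level: the p-type devices form a parallel pull-up network and the n-type devices a series pull-down network. The choice of gate and the code itself are illustrative additions, not part of the original text.

    # Switch-level sketch of a 2-input CMOS NAND gate, illustrating how
    # complementary p-type and n-type MOSFET pairs realise a logic function.
    # PMOS devices conduct when their gate input is 0; NMOS devices conduct
    # when their gate input is 1. All values are ideal 0/1 levels.

    def cmos_nand(a: int, b: int) -> int:
        """Output of a static CMOS NAND gate for inputs a and b (0 or 1)."""
        # Pull-up network: two PMOS transistors in parallel between VDD and output.
        pull_up = (a == 0) or (b == 0)
        # Pull-down network: two NMOS transistors in series between output and GND.
        pull_down = (a == 1) and (b == 1)
        # In static CMOS exactly one network conducts for any input combination.
        assert pull_up != pull_down
        return 1 if pull_up else 0

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", cmos_nand(a, b))   # NAND truth table: 1, 1, 1, 0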

Two important characteristics of CMOS devices are high noise immunity and low static power consumption. Significant power is only drawn when the transistors in the CMOS device are switching between on and off states. Consequently, CMOS devices do not produce as much waste heat as other forms of logic, for example transistor-transistor logic (TTL) or NMOS logic. CMOS also allows a high density of logic functions on a chip. It was primarily for this reason that CMOS became the most widely used technology for implementing VLSI chips.

The phrase "metal

oxide

semiconductor" is a reference to the physic
al structure of certain

field
-
effect
transistors
, having a metal gate electrode placed on top of an oxide insulator, which in turn is on top of
a

semiconductor material
.

Aluminium

was once used but now the material is

polysilicon
. Other

metal
gates

have made a comeback with the advent of

high
-
k

dielectric mater
ials in the CMOS process, as
announced by IBM and Intel for the

45 nanometre

node and beyond.
2

2. History

During the 1920s, several inventors attempted devices that were intended to control the current in solid-state diodes and convert them into triodes. Success, however, had to wait until after World War II, during which the attempt to improve silicon and germanium crystals for use as radar detectors led to improvements both in fabrication and in the theoretical understanding of the quantum-mechanical states of carriers in semiconductors, and after which the scientists who had been diverted to radar development returned to solid-state device development. With the invention of the transistor at Bell Labs in 1947, the field of electronics took a new direction, shifting from power-hungry vacuum tubes to solid-state devices.

With the small and effective transistor in their hands, electrical engineers of the 1950s saw the possibility of constructing far more advanced circuits than before. However, as the complexity of the circuits grew, problems started arising. One problem was that every component had to be wired and assembled by hand, which made large circuits expensive and unreliable. Another problem was the size of the circuits. A complex circuit, like a computer, was dependent on speed. If the components of the computer were too large or the wires interconnecting them too long, the electric signals could not travel fast enough through the circuit, making the computer too slow to be effective.

Jack Kilby at Texas Instruments found a solution to this problem in 1958. Kilby's idea was to make all the components and the chip out of the same block (monolith) of semiconductor material. When the rest of the workers returned from vacation, Kilby presented his new idea to his superiors. He was allowed to build a test version of his circuit, and in September 1958 he had his first integrated circuit ready. Although the first integrated circuit was pretty crude and had some problems, the idea was groundbreaking. By making all the parts out of the same block of material and adding the metal needed to connect them as a layer on top of it, there was no more need for individual discrete components. No more wires and components had to be assembled manually. The circuits could be made smaller and the manufacturing process could be automated. From this, the idea of integrating all components on a single silicon wafer came into existence, which led to small-scale integration (SSI) in the early 1960s, medium-scale integration (MSI) in the late 1960s, and large-scale integration (LSI) and VLSI in the 1970s and 1980s, with tens of thousands of transistors on a single chip (later hundreds of thousands and now millions).

3. Developments

The first semiconductor chips held two transistors each. Subsequent advances added more and more transistors, and, as a consequence, more individual functions or systems were integrated over time. The first integrated circuits held only a few devices, perhaps as many as ten diodes, transistors, resistors and capacitors, making it possible to fabricate one or more logic gates on a single device. Now known retrospectively as small-scale integration (SSI), improvements in technique led to devices with hundreds of logic gates, known as medium-scale integration (MSI). Further improvements led to large-scale integration (LSI), i.e. systems with at least a thousand logic gates. Current technology has moved far past this mark, and today's microprocessors have many millions of gates and billions of individual transistors.

At one time, there was an effort to name and calibrate various levels of large-scale integration above VLSI. Terms like ultra-large-scale integration (ULSI) were used. But the huge number of gates and transistors available on common devices has rendered such fine distinctions moot. Terms suggesting greater than VLSI levels of integration are no longer in widespread use.

As of early 2008, billion-transistor processors are commercially available. This is expected to become more commonplace as semiconductor fabrication moves from the current generation of 65 nm processes to the next 45 nm generations (while experiencing new challenges such as increased variation across process corners). A notable example is Nvidia's 280 series GPU. This GPU is unique in that almost all of its 1.4 billion transistors are used for logic, in contrast to the Itanium, whose large transistor count is largely due to its 24 MB L3 cache. Current designs, unlike the earliest devices, use extensive design automation and automated logic synthesis to lay out the transistors, enabling higher levels of complexity in the resulting logic functionality. Certain high-performance logic blocks like the SRAM (static random-access memory) cell, however, are still designed by hand to ensure the highest efficiency. VLSI technology may be moving toward further radical miniaturization with the introduction of NEMS technology.

GROUP B:

1. Structured design

Structured VLSI design is a modular methodology originated by Carver Mead and Lynn Conway for saving microchip area by minimizing the interconnect fabric area. This is obtained by repetitive arrangement of rectangular macro blocks which can be interconnected using wiring by abutment. An example is partitioning the layout of an adder into a row of equal bit-slice cells. In complex designs this structuring may be achieved by hierarchical nesting.
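
The bit-slice idea can be sketched in software terms: a ripple-carry adder built from a row of identical full-adder "slices", where adjacent slices connect only through the carry signal, much as abutted layout cells connect only along their shared edge. The Python model below is an illustrative analogy, not part of the original text.

    # Illustrative software analogue of a bit-sliced ripple-carry adder: one
    # identical "slice" (a full adder) is repeated once per bit, and adjacent
    # slices connect only through the carry, much like abutted layout cells.

    def full_adder_slice(a: int, b: int, carry_in: int) -> tuple[int, int]:
        """One bit slice: returns (sum_bit, carry_out)."""
        total = a + b + carry_in
        return total & 1, total >> 1

    def ripple_carry_add(a_bits, b_bits):
        """Add two little-endian bit lists by abutting identical slices."""
        carry = 0
        sum_bits = []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder_slice(a, b, carry)
            sum_bits.append(s)
        return sum_bits + [carry]

    # 0b0110 (6) + 0b0011 (3) = 0b1001 (9), bits given least significant first.
    print(ripple_carry_add([0, 1, 1, 0], [1, 1, 0, 0]))   # -> [1, 0, 0, 1, 0]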

Structured VLSI design was popular in the early 1980s, but lost its popularity later because of the advent of placement and routing tools, which waste a lot of area on routing; this is tolerated because of the progress of Moore's law. When introducing the hardware description language KARL in the mid-1970s, Reiner Hartenstein coined the term "structured VLSI design" (originally as "structured LSI design"), echoing Edsger Dijkstra's structured programming approach, which uses procedure nesting to avoid chaotic spaghetti-structured programs.

Challenges

As microprocessors become more complex due to technology scaling, microprocessor designers have encountered several challenges which force them to think beyond the design plane and look ahead to post-silicon:

- Power usage/heat dissipation: As threshold voltages have ceased to scale with advancing process technology, dynamic power dissipation has not scaled proportionally. Maintaining logic complexity when scaling the design down only means that the power dissipation per area will go up. This has given rise to techniques such as dynamic voltage and frequency scaling (DVFS) to minimize overall power; see the sketch after this list.

- Process variation: As photolithography techniques tend closer to the fundamental laws of optics, achieving high accuracy in doping concentrations and etched wires is becoming more difficult and prone to errors due to variation. Designers now must simulate across multiple fabrication process corners before a chip is certified ready for production.

- Stricter design rules: Due to lithography and etch issues with scaling, design rules for layout have become increasingly stringent. Designers must keep ever more of these rules in mind while laying out custom circuits. The overhead for custom design is now reaching a tipping point, with many design houses opting to switch to electronic design automation (EDA) tools to automate their design process.

- Timing/design closure: As clock frequencies tend to scale up, designers are finding it more difficult to distribute and maintain low clock skew between these high-frequency clocks across the entire chip. This has led to a rising interest in multicore and multiprocessor architectures, since an overall speedup can be obtained by lowering the clock frequency and distributing processing.

- First-pass success: As die sizes shrink (due to scaling) and wafer sizes go up (to lower manufacturing costs), the number of dies per wafer increases, and the complexity of making suitable photomasks goes up rapidly. A mask set for a modern technology can cost several million dollars. This non-recurring expense deters the old iterative philosophy involving several "spin-cycles" to find errors in silicon, and encourages first-pass silicon success. Several design philosophies have been developed to aid this new design flow, including design for manufacturing (DFM), design for test (DFT), and Design for X.
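
The benefit of DVFS can be illustrated with the standard first-order switching-power approximation P_dyn ≈ α·C·V²·f. The Python sketch below and its numbers are illustrative assumptions, not figures from the original text.

    # Minimal sketch of why dynamic voltage and frequency scaling (DVFS) saves
    # power. Assumes the first-order model P_dyn = alpha * C * V^2 * f, where
    # alpha is the activity factor, C the switched capacitance, V the supply
    # voltage and f the clock frequency. Values below are illustrative only.

    def dynamic_power(alpha, c_farads, v_volts, f_hertz):
        """First-order dynamic (switching) power of a CMOS block, in watts."""
        return alpha * c_farads * v_volts ** 2 * f_hertz

    # Nominal operating point (illustrative numbers, not from the text).
    p_nominal = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=1.0, f_hertz=2e9)

    # DVFS point: drop frequency by 25% and supply voltage by 10%.
    p_scaled = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=0.9, f_hertz=1.5e9)

    print(f"nominal: {p_nominal:.3f} W, scaled: {p_scaled:.3f} W")
    print(f"saving: {100 * (1 - p_scaled / p_nominal):.1f} %")  # ~39% in this example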

2. Inversion

CMOS circuits are constructed in such a way that all PMOS transistors must have either an input from the voltage source or from another PMOS transistor. Similarly, all NMOS transistors must have either an input from ground or from another NMOS transistor. The composition of a PMOS transistor creates low resistance between its source and drain contacts when a low gate voltage is applied and high resistance when a high gate voltage is applied. On the other hand, the composition of an NMOS transistor creates high resistance between source and drain when a low gate voltage is applied and low resistance when a high gate voltage is applied. CMOS accomplishes current reduction by complementing every nMOSFET with a pMOSFET and connecting both gates and both drains together. A high voltage on the gates will cause the nMOSFET to conduct and the pMOSFET not to conduct, while a low voltage on the gates causes the reverse. This arrangement greatly reduces power consumption and heat generation. However, during the switching time both MOSFETs conduct briefly as the gate voltage goes from one state to another. This induces a brief spike in power consumption and becomes a serious issue at high frequencies.

Consider an input A connected to both a PMOS transistor (pulling up to the supply) and an NMOS transistor (pulling down to ground), with the two drains joined at the output Q. When the voltage of input A is low, the NMOS transistor's channel is in a high-resistance state. This limits the current that can flow from Q to ground. The PMOS transistor's channel is in a low-resistance state and much more current can flow from the supply to the output. Because the resistance between the supply voltage and Q is low, the voltage drop between the supply voltage and Q due to a current drawn from Q is small. The output therefore registers a high voltage.

On the other hand, when the voltage of input A is high, the PMOS transistor is in an OFF (high-resistance) state, which limits the current flowing from the positive supply to the output, while the NMOS transistor is in an ON (low-resistance) state, allowing the output to drain to ground. Because the resistance between Q and ground is low, the voltage drop due to a current drawn into Q, which would place Q above ground, is small. This low drop results in the output registering a low voltage.

In short, the outputs of the PMOS and NMOS transistors are complementary, such that when the input is low the output is high, and when the input is high the output is low. Because of this behaviour of input and output, the CMOS circuit's output is the inverse of its input.
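
A minimal switch-level sketch of this inverter behaviour, with each transistor modelled as an ideal switch (the code is an illustrative addition, not part of the original text):

    # Switch-level sketch of the CMOS inverter described above. Each transistor
    # is modelled as an ideal switch: the PMOS conducts when its gate is low,
    # the NMOS conducts when its gate is high.

    def cmos_inverter(a: int) -> int:
        """Return the output Q of a CMOS inverter for input A (0 = low, 1 = high)."""
        pmos_on = (a == 0)   # PMOS pulls Q up to the supply when A is low
        nmos_on = (a == 1)   # NMOS pulls Q down to ground when A is high
        if pmos_on and not nmos_on:
            return 1         # low-resistance path from supply to Q -> output high
        if nmos_on and not pmos_on:
            return 0         # low-resistance path from Q to ground -> output low
        raise ValueError("exactly one transistor should conduct in steady state")

    for a in (0, 1):
        print(f"A={a} -> Q={cmos_inverter(a)}")   # prints A=0 -> Q=1, A=1 -> Q=0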

A note on nomenclature: the power supplies for CMOS are called VDD and VSS, or VCC and ground (GND), depending on the manufacturer. VDD and VSS are carryovers from conventional MOS circuits and stand for the drain and source supplies. These do not apply directly to CMOS, since both supplies are really source supplies. VCC and ground are carryovers from TTL logic, and that nomenclature has been retained with the introduction of the 54C/74C line of CMOS.

3. Iddq testing


Iddq testing is a method for testing CMOS integrated circuits for the presence of manufacturing faults. It relies on measuring the supply current (Idd) in the quiescent state (when the circuit is not switching and inputs are held at static values). The current consumed in this state is commonly called Iddq, for Idd (quiescent), hence the name.

Iddq testing uses the principle that in a correctly operating quiescent CMOS digital circuit, there is no static current path between the power supply and ground, except for a small amount of leakage. Many common semiconductor manufacturing faults will cause the current to increase by orders of magnitude, which can be easily detected. This has the advantage of checking the chip for many possible faults with one measurement. Another advantage is that it may catch faults that are not found by conventional stuck-at fault test vectors.

Iddq testing is somewhat more complex than just measuring the supply current. If a line is shorted to Vdd, for example, it will still draw no extra current if the gate driving the signal is attempting to set it to '1'. However, a different vector set that attempts to set the signal to '0' will show a large increase in quiescent current, signalling a bad part. Typical Iddq test vector sets may have 20 or so vectors. Note that Iddq test vectors require only controllability, and not observability. This is because the observability is through the shared power supply connection.
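
A toy illustration of the principle, using a hypothetical two-input circuit, an assumed leakage level, an assumed pass/fail threshold, and a made-up bridging defect that only some vectors activate (none of these values come from the original text):

    # Toy illustration of Iddq testing: a defect-free CMOS circuit draws only a
    # small leakage current in the quiescent state, while a bridging defect adds
    # a large current for the input vectors that activate it. All numbers and
    # the fault model are illustrative only.

    LEAKAGE_A = 1e-8      # assumed quiescent leakage of a good part, in amperes
    DEFECT_A = 1e-3       # assumed extra current drawn when the defect is activated
    THRESHOLD_A = 1e-6    # pass/fail limit on the measured Iddq

    def measured_iddq(vector, defect_active):
        """Quiescent supply current for one test vector."""
        return LEAKAGE_A + (DEFECT_A if defect_active(vector) else 0.0)

    # Hypothetical defect: a short that only draws current when input bit 0 is 0.
    defect = lambda v: v[0] == 0

    vectors = [(0, 0), (0, 1), (1, 0), (1, 1)]   # small Iddq vector set
    fails = [v for v in vectors if measured_iddq(v, defect) > THRESHOLD_A]
    print("failing vectors:", fails)             # -> [(0, 0), (0, 1)]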

Iddq testing has many advantages:

- It is a simple and direct test that can identify physical defects.
- The area and design time overhead are very low.
- Test generation is fast.
- Test application time is fast since the vector sets are small.
- It catches some defects that other tests, particularly stuck-at logic tests, do not.

Drawback: compared to scan testing, Iddq testing is time-consuming and therefore more expensive, since it relies on current measurements that take much more time than reading digital pins in mass production.

4. Two analytical models

When the objective of the analysis is to determine what stock to buy and at what price, there are two basic methodologies:

1. Fundamental analysis maintains that markets may misprice a security in the short run but that the "correct" price will eventually be reached. Profits can be made by purchasing the mispriced security and then waiting for the market to recognize its "mistake" and reprice the security.

2. Technical analysis maintains that all information is already reflected in the stock price. Trends 'are your friend' and sentiment changes predate and predict trend changes. Investors' emotional responses to price movements lead to recognizable price chart patterns. Technical analysis does not care what the 'value' of a stock is; its price predictions are only extrapolations from historical price patterns.

Investors can use any or all of these different but somewhat complementary methods for stock picking. For example, many fundamental investors use technicals for deciding entry and exit points, and many technical investors use fundamentals to limit their universe of possible stocks to 'good' companies.

The choice of stock analysis is determined by the investor's belief in the different paradigms for "how the stock market works". See the discussions at efficient-market hypothesis, random walk hypothesis, capital asset pricing model, Fed model theory of equity valuation, market-based valuation, and behavioral finance.

Fundamental analysis includes:

1. Economic analysis
2. Industry analysis
3. Company analysis

On the basis of these three analyses, the intrinsic value of the share is determined. This is considered the true value of the share. If the intrinsic value is higher than the market price, it is recommended to buy the share; if it is equal to the market price, hold the share; and if it is less than the market price, sell the share.
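
The buy/hold/sell rule above can be stated as a tiny Python function (an illustrative sketch; the function name and example prices are made up):

    def recommend(intrinsic_value: float, market_price: float) -> str:
        """Buy/hold/sell recommendation from intrinsic value vs. market price."""
        if intrinsic_value > market_price:
            return "buy"     # share appears undervalued
        if intrinsic_value == market_price:
            return "hold"    # fairly priced (exact equality used for illustration)
        return "sell"        # share appears overvalued

    print(recommend(120.0, 100.0))   # -> buy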