Electricity and Magnetism-ANS


ELECTRICITY AND MAGNETISM

5 MARKS:

1. Electromagnetism

Electromagnetism is the branch of science concerned with the forces that occur between electrically charged particles. In electromagnetic theory these forces are explained using electromagnetic fields. The electromagnetic force is one of the four fundamental interactions in nature, the other three being the strong interaction, the weak interaction and gravitation.

The word is a compound of two Greek terms: ἤλεκτρον (ēlektron), "amber" (electrostatic phenomena were first described as properties of amber by the philosopher Thales), and μαγνήτης (magnētēs), "magnet" (after the magnetic stones found in antiquity in the vicinity of the Greek city of Magnesia, in Lydia, Asia Minor).

Electromagnetism is the interaction responsible for practically all the phenomena encountered in daily life, with the exception of gravity. Ordinary matter takes its form as a result of intermolecular forces between individual molecules in matter. Electrons are bound by electromagnetic wave mechanics into orbitals around atomic nuclei to form atoms, which are the building blocks of molecules. This governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms, which are in turn determined by the interaction between the electromagnetic force and the momentum of the electrons.

Electromagnetism manifests as both electric fields and magnetic fields. Both fields are simply different aspects of electromagnetism, and hence are intrinsically related. Thus, a changing electric field generates a magnetic field; conversely, a changing magnetic field generates an electric field. This effect is called electromagnetic induction, and is the basis of operation for electrical generators, induction motors, and transformers. Mathematically speaking, magnetic fields and electric fields are convertible with relative motion as a 2nd-order tensor or bivector.
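
To make the induction statement concrete, here is a minimal worked example of Faraday's law of induction, EMF = −N·dΦ/dt; the coil geometry and field ramp below are illustrative assumptions, not values from the text.

```python
# Minimal Faraday's-law sketch: a uniform field ramping at dB/dt through a
# coil of N turns and area A induces EMF = -N * A * dB/dt.
# All numeric values are illustrative.
N_turns = 100        # turns of wire in the coil
area = 0.01          # coil area, m^2
dB_dt = 0.5          # rate of change of the magnetic field, T/s
emf = -N_turns * area * dB_dt
print(f"induced EMF = {emf:.2f} V")  # -0.50 V
```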

Electric fields are the cause of several common phenomena, such as electric potential (such as the voltage of a battery) and electric current (such as the flow of electricity through a flashlight). Magnetic fields are the cause of the force associated with magnets.

In quantum electrodynamics, electromagnetic interactions between charged particles can be calculated using the method of Feynman diagrams, in which we picture messenger particles called virtual photons being exchanged between charged particles. This method can be derived from the field picture through perturbation theory.


2. Classical electrodynamics

The scientist William Gilbert proposed, in his De Magnete (1600), that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle, but the link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments in 1752. One of the first to discover and publish a link between man-made electric current and magnetism was Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment.[2] Ørsted's work influenced Ampère to produce a theory of electromagnetism that set the subject on a mathematical foundation.

A theory of electromagnetism, known as classical electromagnetism, was developed by various physicists over the course of the 19th century, culminating in the work of James Clerk Maxwell, who unified the preceding developments into a single theory and discovered the electromagnetic nature of light. In classical electromagnetism, the electromagnetic field obeys a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.

One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in a vacuum is a universal constant, dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905 Albert Einstein solved the problem with the introduction of special relativity, which replaces classical kinematics with a new theory of kinematics that is compatible with classical electromagnetism. (For more information, see History of special relativity.)

In addition, relativity theory shows that in moving frames of reference a magnetic field transforms to a field with a nonzero electric component and vice versa, thus firmly showing that they are two sides of the same coin; hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity.)


3. Types of circuits

Analog circuits

Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. Analog circuits use a continuous range of voltage, as opposed to discrete levels as in digital circuits.

The number of different analog circuits so far devised is huge, especially because a 'circuit' can be defined as anything from a single component to systems containing thousands of components.

Analog circuits are sometimes called linear circuits, although many non-linear effects are used in analog circuits, such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers, operational amplifiers and oscillators.

One rarely finds modern circuits that are entirely analog. These days analog circuitry may use digital or even microprocessor techniques to improve performance. This type of circuit is usually called "mixed signal" rather than analog or digital.

Sometimes it may be difficult to differentiate between analog and digital circuits, as they have elements of both linear and non-linear operation. An example is the comparator, which takes in a continuous range of voltage but only outputs one of two levels, as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch, having essentially two levels of output.
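
The comparator behaviour just described is easy to sketch in code: a continuous input voltage is collapsed onto exactly two output levels. The rail and threshold values below are illustrative assumptions.

```python
# Comparator sketch: continuous analog input, two-level digital-style output.
V_HIGH, V_LOW, V_THRESHOLD = 5.0, 0.0, 2.5   # volts (illustrative)

def comparator(v_in: float) -> float:
    """Return the High rail if the input exceeds the threshold, else Low."""
    return V_HIGH if v_in > V_THRESHOLD else V_LOW

for v in (0.3, 2.4, 2.6, 4.9):               # a continuous range of inputs
    print(f"{v:.1f} V in -> {comparator(v):.1f} V out")
```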

Digital circuits

Digital circuits are electric circuits based on a number of discrete voltage levels. Digital circuits are the most common physical representation of Boolean algebra, and are the basis of all digital computers. To most engineers, the terms "digital circuit", "digital system" and "logic" are interchangeable in the context of digital circuits. Most digital circuits use a binary system with two voltage levels labeled "0" and "1". Often logic "0" will be a lower voltage and referred to as "Low" while logic "1" is referred to as "High". However, some systems use the reverse definition ("0" is "High") or are current based.

Ternary (with three states) logic has been studied, and some prototype computers made.

Computers, electronic clocks, and programmable logic controllers (used to control industrial processes) are constructed of digital circuits.

20 MARKS:

1. Integral equation solvers

The discrete dipole approximation

The discrete dipole approximation (DDA) is a flexible technique for computing scattering and absorption by targets of arbitrary geometry. The formulation is based on the integral form of Maxwell's equations. The DDA is an approximation of the continuum target by a finite array of polarizable points. The points acquire dipole moments in response to the local electric field. The dipoles of course interact with one another via their electric fields, so the DDA is also sometimes referred to as the coupled dipole approximation. The resulting linear system of equations is commonly solved using conjugate gradient iterations. The discretization matrix has symmetries (the integral form of the Maxwell equations has the form of a convolution), enabling the Fast Fourier Transform to be used to multiply the matrix by a vector during the conjugate gradient iterations.
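
As a rough sketch of the solver loop described above, the following is a generic conjugate-gradient iteration written against an abstract matrix-vector product. In a real DDA code that product would be a discrete convolution evaluated with FFTs and the interaction matrix is complex; the dense real symmetric positive-definite matrix here is only a stand-in.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-8, max_iter=200):
    """Solve A x = b given only a matrix-vector product callable."""
    x = np.zeros_like(b)
    r = b - matvec(x)           # initial residual
    p = r.copy()                # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy stand-in for the DDA interaction matrix (symmetric positive definite).
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(lambda v: A @ v, b)
print(np.allclose(A @ x, b, atol=1e-6))   # True
```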

Method of moments (MoM) or boundary element method (BEM)

The method of moments (MoM) or boundary element method (BEM) is a numerical computational method of solving linear partial differential equations which have been formulated as integral equations (i.e. in boundary integral form). It can be applied in many areas of engineering and science including fluid mechanics, acoustics, electromagnetics, fracture mechanics, and plasticity.

MoM has become more popular since the 1980s. Because it requires calculating only boundary values, rather than values throughout the space, it is significantly more efficient in terms of computational resources for problems with a small surface/volume ratio. Conceptually, it works by constructing a "mesh" over the modeled surface. However, for many problems, BEM is significantly less efficient than volume-discretization methods (finite element method, finite difference method, finite volume method). Boundary element formulations typically give rise to fully populated matrices. This means that the storage requirements and computational time will tend to grow according to the square of the problem size. By contrast, finite element matrices are typically banded (elements are only locally connected) and the storage requirements for the system matrices typically grow linearly with the problem size. Compression techniques (e.g. multipole expansions or adaptive cross approximation/hierarchical matrices) can be used to ameliorate these problems, though at the cost of added complexity and with a success rate that depends heavily on the nature and geometry of the problem.
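
The quadratic-versus-linear storage growth mentioned above can be made concrete with a back-of-the-envelope comparison; the 16-byte complex entries and the bandwidth of 100 are illustrative assumptions.

```python
# Dense BEM matrix: about N^2 entries. Banded FEM-style matrix with
# bandwidth b: about N * b entries. Entry size: complex double (16 bytes).
BYTES, BANDWIDTH = 16, 100
for N in (1_000, 10_000, 100_000):
    dense_gb = N * N * BYTES / 1e9
    banded_gb = N * BANDWIDTH * BYTES / 1e9
    print(f"N = {N:>7}: dense {dense_gb:10.3f} GB, banded {banded_gb:7.3f} GB")
```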

BEM is applicable to problems for which Green's functions can be calculated. These usually involve fields in linear homogeneous media. This places considerable restrictions on the range and generality of problems suitable for boundary elements. Nonlinearities can be included in the formulation, although they generally introduce volume integrals which require the volume to be discretized before solution, removing an oft-cited advantage of BEM.

Fast multipole method (FMM)

The fast multipole method (FMM) is an alternative to MoM or Ewald summation. It is an accurate simulation technique and requires less memory and processor power than MoM. The FMM was first introduced by Greengard and Rokhlin[3][4] and is based on the multipole expansion technique. The first application of the FMM in computational electromagnetics was by Engheta et al. (1992).[5] FMM can also be used to accelerate MoM.

Partial element equivalent circuit (PEEC) method

The partial element equivalent circuit (PEEC) method is a 3D full-wave modeling method suitable for combined electromagnetic and circuit analysis. Unlike MoM, PEEC is a full-spectrum method, valid from dc to the maximum frequency determined by the meshing. In the PEEC method, the integral equation is interpreted as Kirchhoff's voltage law applied to a basic PEEC cell, which results in a complete circuit solution for 3D geometries. The equivalent circuit formulation allows additional SPICE-type circuit elements to be easily included. Further, the models and the analysis apply to both the time and the frequency domains. The circuit equations resulting from the PEEC model are easily constructed using a modified loop analysis (MLA) or modified nodal analysis (MNA) formulation. Besides providing a direct current solution, it has several other advantages over a MoM analysis for this class of problems, since any type of circuit element can be included in a straightforward way with appropriate matrix stamps.

The PEEC method has recently been extended to include nonorthogonal geometries. This model extension, which is consistent with the classical orthogonal formulation, includes the Manhattan representation of the geometries in addition to the more general quadrilateral and hexahedral elements. This helps keep the number of unknowns at a minimum and thus reduces computational time for nonorthogonal geometries.
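
Since the passage leans on the modified nodal analysis (MNA) formulation, here is a minimal resistive MNA solve; the two-node circuit and its element values are invented for illustration (real PEEC cells add partial inductances and capacitances to the same kind of system).

```python
import numpy as np

# Toy MNA system G v = i: a 1 A source into node 1, R1 = 10 ohm from node 1
# to ground, R2 = 20 ohm between nodes 1 and 2, R3 = 20 ohm from node 2 to
# ground. Diagonal entries sum the conductances at a node; off-diagonals are
# the negated conductances between nodes.
G = np.array([[1/10 + 1/20, -1/20],
              [-1/20,        1/20 + 1/20]])
i = np.array([1.0, 0.0])
v = np.linalg.solve(G, i)
print(v)   # [8. 4.]  -> node voltages in volts
```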


2. Differential equation solvers

Finite-difference time-domain (FDTD)

Finite-difference time-domain (FDTD) is a popular CEM technique. It is easy to understand and has an exceptionally simple implementation for a full-wave solver. It is at least an order of magnitude less work to implement a basic FDTD solver than either an FEM or MoM solver. FDTD is the only technique where one person can realistically implement the method themselves in a reasonable time frame, but even then, this will be for a quite specific problem.[1]

Since it is a time-domain method, solutions can cover a wide frequency range with a single simulation run, provided the time step is small enough to satisfy the Nyquist–Shannon sampling theorem for the desired highest frequency.

FDTD belongs in the general class of grid-based differential time-domain numerical modeling methods. Maxwell's equations (in partial differential form) are modified to central-difference equations, discretized, and implemented in software. The equations are solved in a cyclic manner: the electric field is solved at a given instant in time, then the magnetic field is solved at the next instant in time, and the process is repeated over and over again.
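
The leapfrog cycle just described fits in a few lines for the one-dimensional vacuum case. This is a bare sketch in normalized units; the grid size, step count, Courant number and Gaussian source below are illustrative choices, not a production solver.

```python
import numpy as np

nz, nt = 200, 400
Ex = np.zeros(nz)        # electric field samples
Hy = np.zeros(nz - 1)    # magnetic field, staggered half a cell (Yee grid)
S = 1.0                  # Courant number; stability in 1-D requires S <= 1

for n in range(nt):
    Hy += S * (Ex[1:] - Ex[:-1])                     # H update from curl of E
    Ex[1:-1] += S * (Hy[1:] - Hy[:-1])               # E update from curl of H
    Ex[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
```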

The basic FDTD algorithm traces back to a seminal 1966 paper by Kane Yee in IEEE Transactions on Antennas and Propagation. Allen Taflove originated the descriptor "Finite-difference time-domain" and its corresponding "FDTD" acronym in a 1980 paper in IEEE Transactions on Electromagnetic Compatibility. Since about 1990, FDTD techniques have emerged as the primary means to model many scientific and engineering problems addressing electromagnetic wave interactions with material structures. An effective technique based on a time-domain finite-volume discretization procedure was introduced by Mohammadian et al. in 1991. Current FDTD modeling applications range from near-DC (ultralow-frequency geophysics involving the entire Earth-ionosphere waveguide) through microwaves (radar signature technology, antennas, wireless communications devices, digital interconnects, biomedical imaging/treatment) to visible light (photonic crystals, nanoplasmonics, solitons, and biophotonics). Approximately 30 commercial and university-developed software suites are available.

Multiresolution time-domain (MRTD)

MRTD is an adaptive alternative to the finite difference time domain method (FDTD) based on wavelet analysis.

Finite element method (FEM)

The finite element method (FEM) is used to find approximate solutions of partial differential equations (PDE) and integral equations. The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an equivalent ordinary differential equation, which is then solved using standard techniques such as finite differences.

In solving partial differential equations, the primary challenge is to create an equation which approximates the equation to be studied, but which is numerically stable, meaning that errors in the input data and intermediate calculations do not accumulate and destroy the meaning of the resulting output. There are many ways of doing this, with various advantages and disadvantages. The finite element method is a good choice for solving partial differential equations over complex domains or when the desired precision varies over the entire domain.

Finite integration technique (FIT)

The finite integration technique (FIT) is a spatial discretization scheme to numerically solve electromagnetic field problems in the time and frequency domains. It preserves basic topological properties of the continuous equations, such as conservation of charge and energy. FIT was proposed in 1977 by Thomas Weiland and has been enhanced continually over the years.[9] This method covers the full range of electromagnetics (from static up to high frequency) and optic applications, and is the basis for commercial simulation tools.

The basic idea of this approach is to apply the Maxwell equations in integral form to a set of staggered grids. This method stands out due to high flexibility in geometric modeling and boundary handling, as well as incorporation of arbitrary material distributions and material properties such as anisotropy, non-linearity and dispersion. Furthermore, the use of a consistent dual orthogonal grid (e.g. a Cartesian grid) in conjunction with an explicit time integration scheme (e.g. the leap-frog scheme) leads to compute- and memory-efficient algorithms, which are especially adapted for transient field analysis in radio frequency (RF) applications.

Pseudospectral time domain (PSTD)

This class of marching-in-time computational techniques for Maxwell's equations uses either discrete Fourier or Chebyshev transforms to calculate the spatial derivatives of the electric and magnetic field vector components that are arranged in either a 2-D grid or 3-D lattice of unit cells. PSTD causes negligible numerical phase velocity anisotropy errors relative to FDTD, and therefore allows problems of much greater electrical size to be modeled.[11]
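
The core trick (spatial derivatives computed spectrally rather than by finite differences) can be sketched as follows; the grid and test function are illustrative.

```python
import numpy as np

# Spectral derivative: differentiate in Fourier space by multiplying by i*k.
n, L = 64, 2 * np.pi
x = np.arange(n) * L / n
f = np.sin(3 * x)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # angular wavenumbers
dfdx = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
print(np.allclose(dfdx, 3 * np.cos(3 * x)))   # True: matches exact derivative
```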

Pseudo-spectral spatial domain (PSSD)

PSSD solves Maxwell's equations by propagating them forward in a chosen spatial direction. The fields are therefore held as a function of time, and (possibly) any transverse spatial dimensions. The method is pseudo-spectral because temporal derivatives are calculated in the frequency domain with the aid of FFTs. Because the fields are held as functions of time, this enables arbitrary dispersion in the propagation medium to be rapidly and accurately modelled with minimal effort.[12] However, the choice to propagate forward in space (rather than in time) brings with it some subtleties, particularly if reflections are important.[13]

Transmission line matrix (TLM)

Transmission line matrix (TLM) can be formulated in several ways: as a direct set of lumped elements solvable directly by a circuit solver (à la SPICE, HSPICE, et al.), as a custom network of elements, or via a scattering matrix approach. TLM is a very flexible analysis strategy akin to FDTD in capabilities, though more codes tend to be available with FDTD engines.

Locally-One-Dimensional FDTD (LOD-FDTD)

This is an implicit method. In this method, in the two-dimensional case, Maxwell's equations are computed in two steps, whereas in the three-dimensional case Maxwell's equations are divided into the three spatial coordinate directions. Stability and dispersion analysis of the three-dimensional LOD-FDTD method has been discussed in detail.


3. Types of electronic noise

Thermal noise

Johnson–Nyquist noise[1] (sometimes thermal, Johnson or Nyquist noise) is unavoidable, and is generated by the random thermal motion of charge carriers (usually electrons) inside an electrical conductor, which happens regardless of any applied voltage.

Thermal noise is approximately white, meaning that its power spectral density is nearly equal throughout the frequency spectrum. The amplitude of the signal has very nearly a Gaussian probability density function. A communication system affected by thermal noise is often modeled as an additive white Gaussian noise (AWGN) channel.

The root mean square (RMS) voltage due to thermal noise, generated in a resistance R (ohms) over bandwidth Δf (hertz), is given by

$v_n = \sqrt{4 k_B T R \, \Delta f}$

where $k_B$ is Boltzmann's constant (joules per kelvin) and T is the resistor's absolute temperature (kelvin).
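
A quick numeric check of the formula, using an illustrative 1 kΩ resistor at room temperature over a 10 kHz bandwidth:

```python
import math

kB = 1.380649e-23            # Boltzmann constant, J/K
T, R, df = 300.0, 1e3, 10e3  # temperature (K), resistance (ohm), bandwidth (Hz)
v_rms = math.sqrt(4 * kB * T * R * df)
print(f"thermal noise: {v_rms * 1e6:.3f} uV rms")   # about 0.407 uV
```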

As the amount of thermal noise generated depends upon the temperature of the circuit, very sensitive circuits such as preamplifiers in radio telescopes are sometimes cooled in liquid nitrogen to reduce the noise level.

Shot noise

Shot noise in electronic devices consists of unavoidable random statistical fluctuations of the electric current in an electrical conductor. Random fluctuations are inherent when current flows, as the current is a flow of discrete charges (electrons).

Flicker noise

Flicker noise, also known as 1/f noise, is a signal or process with a frequency spectrum that falls off steadily into the higher frequencies, with a pink spectrum. It occurs in almost all electronic devices, and results from a variety of effects, though always related to a direct current.

Burst noise

Burst noise consists of sudden step-like transitions between two or more levels (non-Gaussian), as high as several hundred microvolts, at random and unpredictable times. Each shift in offset voltage or current lasts for several milliseconds, and the intervals between pulses tend to be in the audio range (less than 100 Hz), leading to the term popcorn noise for the popping or crackling sounds it produces in audio circuits.

Avalanche noise

Avalanche noise is the noise produced when a junction diode is operated at the onset of avalanche breakdown, a semiconductor junction phenomenon in which carriers in a high voltage gradient develop sufficient energy to dislodge additional carriers through physical impact, creating ragged current flows.

The noise level in an electronic system is typically measured as an electrical power N in watts or dBm, a root mean square (RMS) voltage (identical to the noise standard deviation) in volts, dBμV, or a mean squared error (MSE) in volts squared. Noise may also be characterized by its probability distribution and noise spectral density N_0(f) in watts per hertz.

A noise signal is typically considered as a linear addition to a useful information signal. Typical signal quality measures involving noise are signal-to-noise ratio (SNR or S/N), signal-to-quantization-noise ratio (SQNR) in analog-to-digital conversion and compression, peak signal-to-noise ratio (PSNR) in image and video coding, E_b/N_0 in digital transmission, carrier-to-noise ratio (CNR) before the detector in carrier-modulated systems, and noise figure in cascaded amplifiers.
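
As a small worked example of the most common of these measures, SNR in decibels is ten times the base-10 logarithm of the power ratio; the signal and noise powers below are illustrative.

```python
import math

signal_power = 1e-3   # watts (illustrative)
noise_power = 1e-9    # watts (illustrative)
snr_db = 10 * math.log10(signal_power / noise_power)
print(f"SNR = {snr_db:.1f} dB")   # 60.0 dB
```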

Noise is a random process, characterized by stochastic properties such as its variance, distribution, and spectral density. The spectral distribution of noise can vary with frequency, so its power density is measured in watts per hertz (W/Hz). Since the power in a resistive element is proportional to the square of the voltage across it, noise voltage (density) can be described by taking the square root of the noise power density, resulting in volts per root hertz (V/√Hz). Integrated circuit devices, such as operational amplifiers, commonly quote equivalent input noise level in these terms (at room temperature).

Noise power is measured in watts or decibels (dB) relative to a standard power, usually indicated by adding a suffix after dB. Examples of electrical noise-level measurement units are dBu, dBm0, dBrn, dBrnC, dBrn(f1−f2), and dBrn(144-line).

Noise levels are usually viewed in opposition to signal levels and so are often seen as part of a signal-to-noise ratio (SNR). Telecommunication systems strive to increase the ratio of signal level to noise level in order to effectively transmit data. In practice, if the transmitted signal falls below the level of the noise (often designated as the noise floor) in the system, data can no longer be decoded at the receiver. Noise in telecommunication systems is a product of both internal and external sources to the system.


4. Bulk power transmission

Engineers design transmission networks to transport the energy as efficiently as feasible, while at the same time taking into account economic factors, network safety and redundancy. These networks use components such as power lines, cables, circuit breakers, switches and transformers. The transmission network is usually administered on a regional basis by an entity such as a regional transmission organization or transmission system operator.

Transmission efficiency is greatly improved by devices that increase the voltage and proportionately reduce the current in the conductors, thus keeping the power transmitted nearly equal to the power input. The reduced current flowing through the line reduces the losses in the conductors. According to Joule's law, energy losses are directly proportional to the square of the current. Thus, reducing the current by a factor of 2 will lower the energy lost to conductor resistance by a factor of 4.
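
A short sketch of that scaling, with invented line values: doubling the transmission voltage halves the current for a fixed transmitted power and quarters the I²R loss.

```python
P_transmitted = 100e6     # 100 MW to deliver (illustrative)
R_line = 10.0             # conductor resistance, ohms (illustrative)
for V in (110e3, 220e3):  # doubling the voltage...
    I = P_transmitted / V
    loss = I ** 2 * R_line
    print(f"{V / 1e3:.0f} kV: I = {I:.0f} A, loss = {loss / 1e6:.2f} MW")
# ...halves the current and cuts the I^2 R loss by a factor of 4.
```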

This change in voltage is usually achieved in AC circuits using a step-up transformer. HVDC systems require relatively costly conversion equipment, which may be economically justified for particular projects, but they are currently less common.

A transmission grid is a network of power stations, transmission lines, and substations. Energy is usually transmitted within a grid with three-phase AC. Single-phase AC is used only for distribution to end users, since it is not usable for large polyphase induction motors. In the 19th century, two-phase transmission was used but required either four wires or three wires with unequal currents. Higher-order phase systems require more than three wires, but deliver marginal benefits.

The capital cost of electric power stations is so high, and electric demand is so variable, that it is often cheaper to import some portion of the needed power than to generate it locally. Because nearby loads are often correlated (hot weather in the Southwest portion of the US might cause many people to use air conditioners), electricity often comes from distant sources. Because of the economics of load balancing, wide area transmission grids now span countries and even large portions of continents. The web of interconnections between power producers and consumers ensures that power can flow, even if a few links are inoperative.

The unvarying (or slowly varying over many hours) portion of the electric demand is known as the base load, and is generally served best by large facilities (which are therefore efficient due to economies of scale) with low variable costs for fuel and operations. Such facilities might be nuclear or coal-fired power stations, or hydroelectric, while other renewable energy sources such as concentrated solar thermal and geothermal power have the potential to provide base load power. Renewable energy sources such as solar photovoltaics, wind, wave, and tidal are, due to their intermittency, not considered "base load" but can still add power to the grid. The remaining power demand, if any, is supplied by peaking power plants, which are typically smaller, faster-responding, and higher-cost sources, such as combined cycle or combustion turbine plants fueled by natural gas.

Long-distance transmission of electricity (thousands of kilometers) is cheap and efficient, with costs of US$0.005–0.02/kWh (compared to annual averaged large producer costs of US$0.01–0.025/kWh, retail rates upwards of US$0.10/kWh, and multiples of retail for instantaneous suppliers at unpredicted highest demand moments).[7] Thus distant suppliers can be cheaper than local sources (e.g., New York City buys a lot of electricity from Canada). Multiple local sources (even if more expensive and infrequently used) can make the transmission grid more fault tolerant to weather and other disasters that can disconnect distant suppliers.

Long-distance transmission allows remote renewable energy resources to be used to displace fossil fuel consumption. Hydro and wind sources cannot be moved closer to populous cities, and solar costs are lowest in remote areas where local power needs are minimal. Connection costs alone can determine whether any particular renewable alternative is economically sensible. Costs can be prohibitive for transmission lines, but various proposals for massive infrastructure investment in high-capacity, very-long-distance super grid transmission networks could be recovered with modest usage fees.

Grid input

At the power stations the energy is produced at a relatively low voltage between about 2.3 kV and 30 kV, depending on the size of the unit. The generator terminal voltage is then stepped up by the power station transformer to a higher voltage (115 kV to 765 kV AC, varying by the transmission system and by country) for transmission over long distances.

Losses

Transmitting electricity at high voltage reduces the fraction of energy lost to resistance, which averages around 7%.[8] For a given amount of power, a higher voltage reduces the current and thus the resistive losses in the conductor. For example, raising the voltage by a factor of 10 reduces the current by a corresponding factor of 10 and therefore the I²R losses by a factor of 100, provided the same sized conductors are used in both cases. Even if the conductor size (cross-sectional area) is reduced 10-fold to match the lower current, the I²R losses are still reduced 10-fold. Long-distance transmission is typically done with overhead lines at voltages of 115 to 1,200 kV. At extremely high voltages, more than 2,000 kV between conductor and ground, corona discharge losses are so large that they can offset the lower resistance loss in the line conductors. Measures to reduce corona losses include conductors having a large diameter, often hollow to save weight,[9] or bundles of two or more conductors.

Transmission and distribution losses in the USA were estimated at 6.6% in 1997[10] and 6.5% in 2007.[10] In general, losses are estimated from the discrepancy between energy produced (as reported by power plants) and energy sold to end customers; the difference between what is produced and what is consumed constitutes transmission and distribution losses.

As of 1980, the longest cost-effective distance for DC electricity was determined to be 7,000 km (4,300 mi). For AC it was 4,000 km (2,500 mi), though all transmission lines in use today are substantially shorter.[7]

In an alternating current circuit, the inductance and capacitance of the phase conductors can be significant. The currents that flow in these components of the circuit impedance constitute reactive power, which transmits no energy to the load. Reactive current causes extra losses in the transmission circuit. The ratio of real power (transmitted to the load) to apparent power is the power factor. As reactive current increases, the reactive power increases and the power factor decreases. For systems with low power factors, losses are higher than for systems with high power factors.
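
The power factor relationship can be stated in three lines; the real and reactive powers below are illustrative.

```python
import math

P_real = 80e6      # watts delivered to the load (illustrative)
Q_reactive = 60e6  # reactive volt-amperes circulating in the line (illustrative)
S_apparent = math.hypot(P_real, Q_reactive)         # |S| = sqrt(P^2 + Q^2)
print(f"power factor = {P_real / S_apparent:.2f}")  # 0.80
```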

Utilities add capacitor banks and other components (such as phase-shifting transformers; static VAR compensators; physical transposition of the phase conductors; and flexible AC transmission systems, FACTS) throughout the system to control reactive power flow for reduction of losses and stabilization of system voltage.