Variable Step Size in LMS using Evolutionary Computation: VSSLMSEC

Ajjaiah H.B.M, Research Scholar, hbmajay@gmail.com
Dr. Prabhakar V. Hunagund, Professor, prabhakar_hunagund@yahoo.co.in, Tel No: 9902893340
Dept. of PG Studies and Research in Applied Electronics,
Gulbarga University, Jnana Ganga, Gulbarga 585 106, Karnataka
Abstract

The Least Mean Square (LMS) algorithm finds its importance in many applications due to its simplicity and robustness. Over the past few years a new framework has been introduced which includes the variable step size least mean square (VSSLMS) algorithm and the affine projection (AP) algorithm. In this paper Evolutionary Computation (EC), or Evolutionary Programming (EP), is discussed. It is shown that the performance achieved by this method is robust and does not require any presetting of the involved parameters based upon the statistical characteristics of the signal, as existing solutions do.
Keywords: AP, EC (EP), LMS algorithm, variable step size.
I. INTRODUCTION
Characteristics such as short training time and high tracking rate have been imposed on channel equalizers in recent digital transmission systems. To accomplish this task, the equalizer tries to extract from the received signal the parameters related to the transmitted information. The transmitted signal passes through the channel before reaching the receiver; in other words, the transmitted signal is convolved with the channel.
When transmitting data (for example fax, video streams, or web content) over copper cable, the transmitted symbols suffer from intersymbol interference (ISI), which is nothing but the convolution of the transmitted signal with the channel [2]. The cause is that copper cables are band limited; in cellular communications it is multipath propagation. Obviously, for a reliable digital transmission system it is crucial to reduce the effect of ISI, and this is where adaptive filters come into the picture.
Two of the most intensively developing areas of digital transmission, namely digital subscriber lines and cellular communications, are strongly dependent on the realization of reliable channel equalizers. One of the possible solutions is the implementation of an equalizer based on a filter with finite impulse response (FIR) employing the well-known LMS algorithm for adjusting its coefficients.
The LMS algorithm is one of the most popular
algorithms in adaptive signal processing. Due
to its simplicity and robustness, it has been the
focus of much study and its implementation in
many applications. The popularity stems from
its relatively low
computational complexity,
good numerical stability, simple structure, and
ease of implementation in terms of hardware.
Usually, the adaptive algorithm consists of a transfer filter for processing the input signal and an algorithm unit for updating the transfer filter's coefficients. The adaptive algorithm unit represents some algorithm to update the coefficients of the transfer filter. For the LMS algorithm, the method to update the coefficients of the transfer filter is given as follows:
w(n) = w(n − 1) + µ · x(n) · e(n)    ...(1)

where µ is the step size of the LMS algorithm. For the LMS algorithm µ is a constant.
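Update rule (1) can be sketched as a minimal Python loop; the function name, NumPy usage, and tap ordering below are illustrative, not the authors' code:

```python
import numpy as np

def lms(x, d, num_taps, mu):
    """Basic LMS adaptive filter implementing update rule (1):
    w(n) = w(n-1) + mu * x(n) * e(n), with constant step size mu."""
    n_samples = len(x)
    w = np.zeros(num_taps)                     # transfer filter coefficients
    e = np.zeros(n_samples)                    # error signal
    for n in range(num_taps - 1, n_samples):
        x_n = x[n - num_taps + 1:n + 1][::-1]  # tap inputs [x(n), ..., x(n-M+1)]
        y = np.dot(w, x_n)                     # filter output
        e[n] = d[n] - y                        # estimation error
        w = w + mu * e[n] * x_n                # coefficient update, eq. (1)
    return w, e
```

For a linear, noiseless channel the coefficients converge to the channel impulse response, and the error decays toward zero.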
II. Development of LMS algorithm
In this paper we introduce an improved method to obtain an optimal step size and algorithm for LMS. The input signal enters the transfer filter, whose coefficients are updated so as to minimize the error at each iteration. The performance of the proposed algorithm is compared with the LMS algorithm [3].
A variation of gradient adaptive step-size LMS algorithms has been presented, with a simplification proposed to a class of the studied algorithms [4]. Adaptation in the variable step size LMS proposed by [5] is based on a bias/variance trade-off of the weighting coefficients. The authors in [6] examine the stability of VSLMS with uncorrelated stationary Gaussian data. Most VSLMS algorithms described in the literature use a data-dependent step size, where the step size depends either on the data before the current time (prior step-size rule) or also on the current time (posterior step-size rule). It has often been assumed that VSLMS algorithms are stable (in the sense of mean-square bounded weights), provided that the step size is constrained to lie within the corresponding stability region for the LMS algorithm. The analysis of these VSLMS algorithms in the literature typically proceeds in two steps [7], [8]. First, a rigorous stability analysis is attempted, leading to conditions for mean-square bounded weights and bounded MSE; second, an approximate performance analysis is carried out, including convergence to and characterization of the asymptotic weight mean, covariance, and MSE. Being able to guarantee stability (mean-square bounded weights) rigorously in this way supports the performance analysis. Variable step-size methods for the normalized least mean square (NLMS) and affine projection (APA) algorithms with a variable smoothing factor have been presented in [9]; with simulation results the authors illustrate that the proposed algorithms improve the convergence rate.
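The variable step-size rule of [5] is a representative example of a data-dependent step size: the step is driven by the squared error, µ(n+1) = αµ(n) + γe²(n), clipped to [µmin, µmax]. A minimal sketch follows; the default parameter values are those commonly quoted for [5] and should be treated as assumptions, to be tuned per application:

```python
import numpy as np

def vss_lms(x, d, num_taps, mu0=0.01, alpha=0.97, gamma=4.8e-4,
            mu_min=1e-4, mu_max=0.1):
    """Variable step-size LMS in the style of [5]:
    mu(n+1) = alpha*mu(n) + gamma*e(n)^2, clipped to [mu_min, mu_max].
    A large error yields a large step (fast tracking); a small error
    yields a small step (low misadjustment)."""
    w = np.zeros(num_taps)
    mu = mu0
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        x_n = x[n - num_taps + 1:n + 1][::-1]  # tap-input vector
        e[n] = d[n] - np.dot(w, x_n)           # estimation error
        w = w + mu * e[n] * x_n                # LMS update with current mu
        mu = np.clip(alpha * mu + gamma * e[n] ** 2, mu_min, mu_max)
    return w, e, mu
```

The clipping guarantees the step size stays inside a chosen stability region, matching the assumption discussed above.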
A. Step size and its Importance
We have seen that the speed of convergence increases as the step size is increased, up to values that are roughly within a factor of 1/2 of the step-size stability limits. Thus, if fast convergence is desired, one should choose a large step size according to the limits. However, we also observe that the misadjustment increases as the step size is increased. Therefore, if highly accurate estimates of the filter coefficients are desired, a small step size should be chosen. This classical tradeoff in convergence speed versus the level of error in steady state dominates the issue of step size selection in many estimation schemes. If the user knows that the relationship between the input signal x(n) and the desired signal d(n) is linear and time-invariant, then one possible solution to the above tradeoff is to choose a large step size initially to obtain fast convergence, and then switch to a smaller step size. The point to switch to a smaller step size is roughly when the excess MSE becomes a small fraction (approximately 1/10th) of the minimum MSE of the filter. This method of gear shifting, as it is commonly known, is part of a larger class of time-varying step size methods.
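Gear shifting can be sketched as follows. Since the minimum MSE is not known online, the sketch triggers the switch when a smoothed squared-error estimate falls below a threshold; the threshold value and smoothing factor here are assumptions standing in for "1/10th of the minimum MSE":

```python
import numpy as np

def gear_shift_lms(x, d, num_taps, mu_fast=0.1, mu_slow=0.01,
                   switch_threshold=1e-3, beta=0.99):
    """LMS with 'gear shifting': start with a large step size for fast
    convergence, then switch once to a small step size when the
    smoothed squared error drops below switch_threshold."""
    w = np.zeros(num_taps)
    mu = mu_fast
    p = 1.0                                    # smoothed error-power estimate
    for n in range(num_taps - 1, len(x)):
        x_n = x[n - num_taps + 1:n + 1][::-1]  # tap-input vector
        e = d[n] - np.dot(w, x_n)
        w = w + mu * e * x_n
        p = beta * p + (1 - beta) * e ** 2     # exponential smoothing of e^2
        if mu == mu_fast and p < switch_threshold:
            mu = mu_slow                       # shift to the low gear
    return w, mu
```

This realizes the tradeoff described above: the fast gear buys convergence speed, the slow gear buys low steady-state misadjustment.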
III.
Evolutionary Programming
Evolutionary Programming is a Global
Optimization algorithm and is an instance of
an Evolutionary Algorithm from the field of
Evolutionary Computation. The approach is a
sibling of other Evolutionary Algorithms such
as the Genetic Algorithm, and Learning Classifier Systems. It is sometimes confused
with Genetic Programming given the similarity
in name, and more recently it shows a strong
functional similarity to Evolution Strategies.
Evolutionary Programming is inspired by
the theory of evolution by means of natural
selection. Specifically, the technique is
inspired by the macro-level or species-level
process of evolution (phenotype, hereditary,
variation) and is not concerned with the
genetic mechanisms of evolution (genome,
chromosomes, genes, alleles).
A.
Metaphor
A population of a species reproduces, creating progeny with small phenotypical variation. The progeny and the parents compete based on their suitability to the environment, where the generally more fit
members constitute the subsequent generation
and are provided with the opportunity to
reproduce themselves. This process repeats,
improving the adaptive fit between the species
and the environment.
B. Strategy
The objective of the Evolutionary
Programming algorithm is to maximize the
suitability of a collection of candidate
solutions in the context of an objective
function from the domain. This objective is
pursued by using an adaptive model with
surrogates for the processes of evolution, specifically hereditary (reproduction with
variation) under competition. The
representation used for candidate solutions is
directly assessable by a cost or objective
function from the domain.
C.
Procedure
Algorithm (below) provides a pseudocode listing of the Evolutionary Programming algorithm for minimizing a cost function.
Input: PopulationSize, ProblemSize, BoutSize
Output: S_best
Population ← InitializePopulation(PopulationSize, ProblemSize)
EvaluatePopulation(Population)
S_best ← GetBestSolution(Population)
While (¬StopCondition())
    Children ← ∅
    For (Parent_i ∈ Population)
        Child_i ← Mutate(Parent_i)
        Children ← Children + Child_i
    End
    EvaluatePopulation(Children)
    S_best ← GetBestSolution(Children, S_best)
    Union ← Population + Children
    For (S_i ∈ Union)
        For (1 To BoutSize)
            S_j ← RandomSelection(Union)
            If (Cost(S_i) < Cost(S_j))
                S_i.wins ← S_i.wins + 1
            End
        End
    End
    Population ← SelectBestByWins(Union, PopulationSize)
End
Return (S_best)
Pseudocode for Evolutionary Programming.
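The pseudocode above can be rendered as a minimal Python sketch for continuous function minimization. The function and parameter names are illustrative; for simplicity a fixed mutation standard deviation is used here instead of the self-adapted, per-solution variances of classical Evolutionary Programming:

```python
import numpy as np

def evolutionary_programming(cost, problem_size, bounds, pop_size=30,
                             bout_size=5, max_gens=300, sigma=0.1, seed=0):
    """Minimize `cost` by Evolutionary Programming: Gaussian mutation
    only (no crossover), then survivor selection by tournament wins
    over the union of parents and children."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, problem_size))
    best = min(pop, key=cost).copy()
    for _ in range(max_gens):
        # Mutation only: each parent produces one child.
        children = np.clip(pop + rng.normal(0.0, sigma, pop.shape), lo, hi)
        gen_best = min(children, key=cost)
        if cost(gen_best) < cost(best):
            best = gen_best.copy()
        # Tournament by wins over the union of parents and children.
        union = np.vstack([pop, children])
        costs = np.array([cost(s) for s in union])
        wins = np.zeros(len(union), dtype=int)
        for i in range(len(union)):
            rivals = rng.integers(0, len(union), bout_size)
            wins[i] = np.sum(costs[i] < costs[rivals])
        keep = np.argsort(-wins)[:pop_size]   # most wins survive
        pop = union[keep]
    return best
```

For example, minimizing the sphere function cost(v) = Σ v² over [-5, 5]² drives the best solution toward the origin.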
D.
Heuristics
• The representation for candidate solutions should be domain specific, such as real numbers for continuous function optimization.
• The sample size (bout size) for tournament selection during competition is commonly between 5% and 10% of the population size.
• Evolutionary Programming traditionally only uses the mutation operator to create new candidate solutions from existing candidate solutions. The crossover operator used in some other Evolutionary Algorithms is not employed in Evolutionary Programming.
• Evolutionary Programming is concerned with the linkage between parent and child candidate solutions and is not concerned with surrogates for genetic mechanisms.
• Continuous function optimization is a popular application for the approach, where real-valued representations are used with a Gaussian-based mutation operator.
• The mutation-specific parameters used in the application of the algorithm to continuous function optimization can be adapted in concert with the candidate solutions.
IV.
Simulation for proposed model
To see the performance of LMSEV, an equalizer with 11 taps is selected for the channel. The input signal contains a total of 200 samples generated randomly from a uniform distribution, shown in Fig. 1. Gaussian noise having zero mean and 0.01 standard deviation is added to the input signal, as shown in Fig. 2 and Fig. 3. For Evolutionary Programming the channel characteristics are given by the vector:
[0.05 −0.063 0.088 −0.126 −0.25 0.9047 0.25 0 0.126 0.038 0.088]
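The simulation setup described above can be reproduced as a sketch. The paper does not spell out how Evolutionary Programming drives the step-size search, so here the EP "population" is simply a small list of candidate step sizes scored by the MSE each produces; the candidate list, seed, and scoring are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Channel and signals described in Section IV.
channel = np.array([0.05, -0.063, 0.088, -0.126, -0.25, 0.9047,
                    0.25, 0.0, 0.126, 0.038, 0.088])
x = rng.uniform(-1, 1, 200)                  # 200 uniform input samples
noise = rng.normal(0.0, 0.01, 200)           # zero mean, std 0.01
d = np.convolve(x, channel)[:200] + noise    # received signal with noise

def lms_mse(mu, num_taps=11):
    """Mean squared error of an 11-tap LMS run at step size mu."""
    w = np.zeros(num_taps)
    errs = []
    for n in range(num_taps - 1, len(x)):
        x_n = x[n - num_taps + 1:n + 1][::-1]
        e = d[n] - np.dot(w, x_n)
        w = w + mu * e * x_n
        errs.append(e ** 2)
    return np.mean(errs)

# Candidate step sizes; the fittest (lowest-MSE) candidate stands in
# for the EP-selected step size.
candidates = [0.0088, 0.022, 0.05, 0.1]
best_mu = min(candidates, key=lms_mse)
```

In the full method, mutation and tournament selection would refine such candidates over generations rather than evaluating a fixed list.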
Fig. 1. Original input signal and signal with noise from channel.
Fig. 2. Fixed step size performance of LMS with step size equal to 0.022 and 0.0088.
Fig. 3. Step size defined by Evolutionary Programming for different population sizes.
CONCLUSION
The problem of an optimal variable step size integrated with the LMS algorithm has been solved with the involvement of evolutionary programming. The presented method is robust and does not require the statistical characteristics of the input signal, as other existing solutions do. Very good convergence and tracking capability are achieved automatically by the presented method. The performance of the proposed EVSSLMS was also checked with different population sizes, and it is shown that with a smaller population the performance is equally good, yielding a higher speed of solution.
Acknowledgement: This work was carried out with the full support of Jyothy Institute of Technology and the Principal of JIT.
REFERENCES
[1] I. Lee and J. M. Cioffi, "A fast computation algorithm for the decision feedback equalizer," IEEE Trans. Commun., vol. 43, pp. 2742–2749, Nov. 1995.
[2] Wee-Peng Ang and B. Farhang-Boroujeny, "A new class of gradient adaptive step-size LMS algorithms," IEEE Trans. Signal Processing, vol. 49, no. 4, pp. 805–810, 2001.
[3] B. Krstajic, L. J. Stankovic, and Z. Uskokovic, "An approach to variable step size LMS algorithm," Electronics Letters, vol. 38, no. 16, pp. 927–928, Aug. 2002.
[4] S. B. Gelfand, Y. Wei, and J. V. Krogmeier, "The stability of variable step-size LMS algorithms," IEEE Trans. Signal Processing, vol. 47, no. 12, Dec. 1999.
[5] R. Kwong and E. W. Johnston, "A variable step size LMS algorithm," IEEE Trans. Signal Processing, vol. 40, pp. 1633–1642, July 1992.
[6] V. J. Mathews and Z. Xie, "A stochastic gradient adaptive filter with gradient adaptive step size," IEEE Trans. Signal Processing, vol. 41, pp. 2075–2087, June 1993.
[7] T. Dai and B. Shahrrava, "Variable step-size NLMS and affine projection algorithms with variable smoothing factor," in Proc. 48th Midwest Symposium on Circuits and Systems, Aug. 2005, pp. 1530–1532, vol. 2.
[8] J. H. Husoy and M. S. E. Abadi, "Unified approach to adaptive filters and their performance," IET Signal Processing, vol. 2, no. 2, pp. 97–109, 2008.
[9] H. C. Shin and A. H. Sayed, "Mean-square performance of a family of affine projection algorithms," IEEE Trans. Signal Processing, vol. 52, pp. 90–102, 2004.
MSE error plot using LMS (y-axis: MSE in dB; x-axis: iteration number).