


A Fuzzy Neural Network Based on the TS Model with Dynamic Consequent Parameters and its
Application to Steam Temperature Control System in Power Plants*


Keming Xie
Department of Automation
Taiyuan University of Technology
Taiyuan, Shanxi, 030024, P. R. China
kmxie@tyut.edu.cn

T. Y. Lin
Department of Mathematics and Computer Science
San Jose State University
San Jose, CA 95192, USA
tylin@cs.sjsu.edu

Jianfeng Nan
Department of Automation
Taiyuan University of Technology
Taiyuan, Shanxi, 030024, P. R. China

* Supported by the Visiting Scholar Foundation of Shanxi Province, P. R. China.

ABSTRACT

This paper presents a new fuzzy neural network based on the Takagi-Sugeno (TS) model with dynamic consequent parameters. In the first step, the network adopts the least-squares method to rough-tune the consequent parameters; this is an off-line process. In the second step, it employs error back-propagation to fine-tune the consequent parameters on-line. The fusion of fuzzy logic and neural networks enables the model to capture physical meaning. In summary, the approach is a semantics-oriented approximation of non-linear maps, and the optimization of the parameters is fast and efficient. The network is applied to the cascade control system of the superheated steam temperature in power plants. The approach is simulated in MATLAB; the simulation shows that the method is effective: fast in response, minimal in overshoot, and robust.


Key words: Fuzzy neural network, TS model, cascade control, superheated steam temperature



1. Introduction


There are two characteristics of the fuzzy inference rule model presented by Takagi and Sugeno [1] (the TS model for short). The first is that all rules in the model are expressible by linear equations. This allows the global output of the model to be written as a succinct mathematical expression, so classical linear control methods can easily be employed to design the non-linear controller. The second is that the partitioning of the antecedents of the inference rules depends on whether there is a local linear relation between input and output. This makes it easy to use local linear models, defined region by region, to describe complex global dynamic characteristics when there is a major change in the operating conditions.


By combining fuzzy systems with neural networks and making full use of the complementary nature of the two approaches, fuzzy neural networks are applied to intelligent control. The essential idea is that the mechanisms of fuzzy systems are transformed into the corresponding structures of neural networks [2], and the inference methods of fuzzy systems are translated into neural networks. Using the learning capability of neural networks, automatic adjustment of the antecedent and consequent parameters can be achieved. The major advantage of this method is that when the target information is inadequate, past experience can be used to structure the neural network. Using the capability of neural networks to learn from examples, the fuzzy relationships between the input and the output can be captured, revised, and summarized.


The least-squares method is fundamental in classical identification theory and is widely used. Both the batch (one-shot) algorithm and the recursive algorithm can easily be realized in engineering, and their obvious advantage is strong robustness. The back-propagation (BP) learning algorithm can effectively revise the weights and thresholds of hidden nodes. The feed-forward neural network (FFNN) presented by the authors in reference [3] focused on the problems of the topological structure of the network. The present paper stresses the training algorithms and presents a TS fuzzy neural network with dynamic consequent parameters (DFNN). In this paper, the least-squares method combined with BP is used to train the networks. The new method is effective: it not only overcomes the drawbacks of the two methods but also takes advantage of their merits. First, the network employs the least-squares method to rough-tune the consequent parameters, which is done off-line. Then, it employs the back-propagation method to fine-tune the consequent parameters, which is done on-line. The method captures the physical meaning in the model and achieves a good fusion between fuzzy logic and neural networks. It is a powerful semantics-oriented approximation of non-linear maps, and the optimization of the parameters is fast and efficient.

2. Topological structure of DFNN


The rules of the TS fuzzy model can be expressed as follows:

    R_i: if x_1(k) is A_1^i, x_2(k) is A_2^i, and x_3(k) is A_3^i,
         then u_i(k) = a_i x_1(k) + b_i x_2(k) + c_i x_3(k),    i = 1, 2, …, R    (1)



where R is the number of rules in the TS fuzzy model, x_1(k), x_2(k), and x_3(k) are the three input variables, u_i(k) is the output of the i-th rule, and A_1^i, A_2^i, and A_3^i are fuzzy subsets of x_1(k), x_2(k), and x_3(k) respectively. The parameters of their membership functions are called the antecedent parameters, and the coefficients in equation (1) are called the consequent parameters. The number of fuzzy granules for x_1(k), x_2(k), and x_3(k) is determined jointly by the complexity and the required precision of the model.


Suppose a group of input values (x_1(k), x_2(k), x_3(k)) is given; then the global output y of the TS fuzzy model is obtained as the weighted average of the rule outputs u_i(k):

    y(k) = Σ_i w_i u_i(k) / Σ_i w_i    (2)

where u_i(k) is determined by the consequent equation of the i-th rule and w_i is the firing-strength weight of the i-th rule for the input vector, calculated by equation (3):

    w_i = μ_A1^i(x_1(k)) ∧ μ_A2^i(x_2(k)) ∧ μ_A3^i(x_3(k))    (3)

where ∧ denotes the fuzzy minimum operation.
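As a concrete illustration of equations (2) and (3), the short Python sketch below (the paper's own simulations were carried out in MATLAB) computes the firing strength of each rule from Gaussian memberships with the min operator and then forms the global output as the weighted average of the rule outputs. The two rules, their membership parameters, and the consequent coefficients are placeholder values chosen only for illustration.

    import numpy as np

    def gaussian(x, c, sigma):
        # Gaussian membership value of x for a fuzzy set with centre c and width sigma.
        return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

    # Two illustrative rules over (x1(k), x2(k), x3(k)); all numbers are placeholders.
    rules = [
        # (centres, widths, consequent coefficients a_i, b_i, c_i)
        (np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]), np.array([0.5, 0.1, 0.2])),
        (np.array([1.0, 1.0, 1.0]), np.array([1.0, 1.0, 1.0]), np.array([0.8, 0.3, 0.1])),
    ]

    x = np.array([0.4, 0.2, 0.1])           # input vector (x1(k), x2(k), x3(k))
    w, u = [], []
    for centres, widths, coeffs in rules:
        mu = gaussian(x, centres, widths)   # membership of each input in the rule's fuzzy sets
        w.append(np.min(mu))                # equation (3): fuzzy AND taken as the minimum
        u.append(np.dot(coeffs, x))         # rule output u_i(k) = a_i*x1 + b_i*x2 + c_i*x3
    y = np.dot(w, u) / np.sum(w)            # equation (2): weighted average of rule outputs
    print(y)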


In order to realize a smooth connection of the local linear input-output relations in the fuzzy subspaces, the TS fuzzy model uses fuzzy logic inference based on the fuzzy granulation of the input space [3]. The ability of the model to describe non-linear characteristics therefore depends mainly on the granulation method and the granulation precision in the input space.


The structure of the DFNN network is shown in Fig. 1; it consists of five layers.

(a) Input layer: the input layer transmits the input vector to the next layer. The i-th neuron corresponds to the i-th element of the vector, i = 1, 2, …, n, where n is the dimension of the input vector.


(b) Fuzzy layer: the function of the fuzzy layer is similar to that of a fuzzy logic controller (FLC). Because every node in the previous layer is connected to N_i nodes, the number of nodes in the fuzzy layer is N_1 + N_2 + … + N_n, and every node realizes a membership function. In this paper the Gaussian membership function is employed. Every node has a physical meaning: it represents a fuzzy subset, that is, a linguistic value such as NL, NM, NS, NZ, PZ, PS, PM, PL, and so on. The antecedent parameters consist of the mean values and deviations of the membership functions. N_i is the number of fuzzy partitions of the i-th input node in layer (a).





(c) Firing strength layer: the firing strength (adaptation grade) of every rule is calculated in this layer. The number of nodes corresponds to the total number of rules N_w. Each neuron node performs the fuzzy logic AND (minimum) operation. If ω_i(t) represents the firing strength of the i-th rule, one has

    ω_i(t) = min{ μ_mj(t), …, μ_kl(t) }    (4)

where i = 1, 2, …, N_w; m, …, k = 1, 2, …, n with m ≠ k; j = 1, 2, …, N_m; l = 1, 2, …, N_k; and N_w = N_1 × N_2 × … × N_n.


(d) Normalized layer: the normalizing calculation is carried out in this layer:

    ω̄_i(t) = ω_i(t) / Σ_j ω_j(t)    (5)


(e) Linear combination layer:

    u_i(t) = a_i x_1(t) + b_i x_2(t) + c_i x_3(t)

is the consequent of every node, determined by the input vector and the consequent parameters a_i, b_i, and c_i, which are evaluated by the learning mechanism. This layer also has a single output node that acts as a linear weighted sum; its output is the control action:

    u(t) = Σ_i ω̄_i(t) u_i(t)    (6)
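To make the five-layer structure concrete, the following Python sketch implements one forward pass of the network described above, assuming Gaussian membership functions and a full rule grid (N_w = N_1 × … × N_n). The class name, the parameter values, and the random consequents are placeholders for illustration, not values taken from the paper.

    import itertools
    import numpy as np

    class DFNNForward:
        # Minimal sketch of the five-layer forward pass of Section 2.
        def __init__(self, centres, widths, consequents):
            self.centres = centres          # centres[i]: centres of the N_i sets of input i (layer (b))
            self.widths = widths            # widths[i]: widths of the N_i sets of input i (layer (b))
            self.consequents = consequents  # one row (a_i, b_i, c_i) per rule, N_w rows (layer (e))

        def forward(self, x):
            # Layer (a): the input vector is passed on unchanged.
            # Layer (b): membership value of each input in each of its fuzzy sets.
            mu = [np.exp(-((x[i] - self.centres[i]) ** 2) / (2.0 * self.widths[i] ** 2))
                  for i in range(len(x))]
            # Layer (c): firing strength of every rule, equation (4) (min over one set per input).
            combos = itertools.product(*[range(len(m)) for m in mu])
            omega = np.array([min(mu[i][j] for i, j in enumerate(c)) for c in combos])
            # Layer (d): normalization, equation (5).
            omega_bar = omega / omega.sum()
            # Layer (e): rule consequents u_i(t) and their weighted sum, equation (6).
            u = self.consequents @ x
            return float(omega_bar @ u)

    # Placeholder parameters: 3 fuzzy sets per input -> N_w = 27 rules.
    centres = [np.linspace(-1.0, 1.0, 3)] * 3
    widths = [np.full(3, 0.5)] * 3
    consequents = np.random.default_rng(0).normal(size=(27, 3))
    net = DFNNForward(centres, widths, consequents)
    print(net.forward(np.array([0.2, -0.1, 0.05])))

Rough tuning and fine tuning, described in the next section, adjust only the consequent matrix; the learning of the antecedent parameters (centres and widths) is discussed in reference [3].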

3. DFNN learning algorithm


In this paper the control strategy is as follows: data collected from the cascade control system are used to rough-tune the parameters of the network off-line; a group of data is then collected on-line, and learning makes the consequent parameters satisfy the demands on the performance index.

3.1 Rough tuning


While simulating the cascade control system, p groups of data (e, ce, T×e, u) are collected as teacher signals to train the network, where e, ce, T×e, and u are the error, the change rate of the error, the error integral, and the control action respectively. This yields the matrix equation Ax = B, where x is the vector composed of all the consequent parameters.
The dimension of x is n×1, and the dimensions of A and B are P×n and P×1 respectively. The least-squares method is employed to minimize ||Ax − B||, which gives the least-squares estimate

    x* = (A^T A)^{-1} A^T B = A^+ B    (7)

where A^+ is the pseudo-inverse of A (A^T A must be non-singular).

Because the equation above involves a large amount of computation and becomes ill-conditioned when A^T A is close to singular, a recurrence formula is employed to calculate the least-squares solution x*. Suppose a_i^T represents the i-th row vector of the matrix A and b_i represents the i-th element of the vector B; then one has [3]

    S_i = S_{i-1} − ( S_{i-1} a_i a_i^T S_{i-1} ) / ( 1 + a_i^T S_{i-1} a_i ),
    x_i = x_{i-1} + S_i a_i ( b_i − a_i^T x_{i-1} ),    i = 1, 2, …, p    (8)

where S is the covariance matrix and the least-squares solution x* is x_p. The initial values of the consequent parameters can be determined in advance from the experience values of existing controllers.


The initial value of the covariance matrix S can be determined as

    S_0 = r I    (9)

where r is a large positive number and I is the n×n identity matrix.

After the p groups of data have been processed, the rough-tuned values of the consequent parameters are obtained and stored in a look-up library.
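The rough-tuning step can be sketched in Python as follows (again, the paper's implementation was in MATLAB). The sketch assumes the usual TS least-squares arrangement, in which the k-th row of A stacks the inputs of sample k multiplied by the normalized firing strength of each rule, so that Ax ≈ B reproduces the output equation (6); the firing strengths themselves would be supplied by the antecedent part of the network. All data in the usage lines are random placeholders.

    import numpy as np

    def rough_tune(inputs, firing, targets, r=1e4):
        # Recursive least-squares rough tuning of the consequent parameters (Section 3.1).
        #   inputs : (P, 3) samples of (error, error integral, error change rate)
        #   firing : (P, R) normalized firing strengths of the R rules
        #   targets: (P,)  recorded control actions u
        #   r      : large positive number for S0 = r*I, equation (9)
        P, n_in = inputs.shape
        R = firing.shape[1]
        n = R * n_in                       # total number of consequent parameters
        S = r * np.eye(n)                  # covariance matrix, initialised by equation (9)
        x = np.zeros(n)                    # could instead start from expert-chosen values
        for k in range(P):
            a = np.outer(firing[k], inputs[k]).ravel()   # k-th row of A
            Sa = S @ a
            gain = Sa / (1.0 + a @ Sa)                   # recursive least-squares gain
            x = x + gain * (targets[k] - a @ x)          # update the parameter estimate
            S = S - np.outer(gain, Sa)                   # update the covariance matrix
        return x.reshape(R, n_in)          # one (a_i, b_i, c_i) row per rule

    # Usage with random placeholder data: P = 200 samples, R = 27 rules.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    W = rng.random(size=(200, 27)); W /= W.sum(axis=1, keepdims=True)
    U = rng.normal(size=200)
    params0 = rough_tune(X, W, U)

The recursion avoids forming and inverting A^T A explicitly, which is exactly what makes it preferable when A^T A is close to singular.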


3.2 Fine tuning


The integral square-error criterion J is adopted as follows:

    (10)

and

    x_1(k) = e(k),  x_2(k) = T e(k),  x_3(k) = ( e(k) − e(k−1) ) / T    (11)


    (13)

    (14)

where T is the sampling period, k is the sampling moment, l is the index of the learning iteration, x_1(k), x_2(k), and x_3(k) are the error, the error integral, and the error differential signals respectively, and a_i(l), b_i(l), and c_i(l) are the coefficients of the rule consequents.


In particular, the teacher signals here are different from those used in rough tuning. The former describe the mapping relation between the error, the change rate of the error, the error integral, and the expected output of the closed-loop control system. One often cannot determine the expected control action for a given deviation, change rate of deviation, and deviation integral, but one can present the expected output response curve. Therefore, in this paper (x_1, x_2, x_3, r) is selected as the teacher signal, where r is the expected output of the cascade control system, in order to improve the characteristics of the DFNN controller. In this process there is an error propagation from y to u. For simplicity of simulation, no particular mathematical model is considered; instead the approximate expression below is used:


    (15)

where u_i and ω̄_i are the consequent linear function value and the rule grade. In order to prevent the denominator from becoming zero when u(k) = u(k−1), one takes

    (16)

This substitution is feasible because equation (16) is an equality. The evaluation criterion is then given by function (10).


If J is less than 0.05, the model is applied directly without any learning; otherwise learning is carried out. In general, 3 to 5 learning iterations are performed, the number depending on the sampling period and the learning rate. Because the learning process is always confined within one sampling period, the number of learning iterations has an upper bound; it may therefore happen that the maximal number of iterations is reached while J is still larger than 0.05. Even then the parameters have been improved to some extent, and the controller can satisfy the speed demands of the process. The learning of the antecedent parameters, in which the intransitive-error algorithm is used, is discussed in detail in reference [3].
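Because the update formulas (13)-(16) are not reproduced above, the fragment below is only a hedged sketch of one fine-tuning step: it reads the back-propagation of Section 3.2 as plain gradient descent on the consequent parameters, takes J as half the squared deviation between the expected output r and the plant output y (consistent with the 0.05 threshold), and assumes the plant sensitivity dy/du has already been approximated from the last two samples in the spirit of equations (15) and (16). In the paper the step is repeated 3 to 5 times within one sampling period.

    import numpy as np

    def fine_tune_step(params, x, omega_bar, r, y, dy_du, eta=0.05):
        # One on-line back-propagation step on the consequent parameters (sketch).
        #   params    : (R, 3) matrix of (a_i, b_i, c_i) from rough tuning
        #   x         : (x1, x2, x3) = (error, error integral, error derivative)
        #   omega_bar : (R,) normalized firing strengths
        #   r, y      : expected and actual output at this sampling moment
        #   dy_du     : approximated plant sensitivity (difference quotient with its
        #               denominator kept away from zero, cf. equations (15)-(16))
        error = r - y
        J = 0.5 * error ** 2              # assumed form of the criterion J of equation (10)
        if J < 0.05:                      # good enough: apply the model without learning
            return params, J
        # dJ/da_i = -(r - y) * dy/du * omega_bar_i * x1, and similarly for b_i and c_i,
        # because u = sum_i omega_bar_i * (a_i*x1 + b_i*x2 + c_i*x3).
        grad = -error * dy_du * np.outer(omega_bar, x)
        return params - eta * grad, J

    # Usage with placeholder numbers (27 rules):
    params = np.zeros((27, 3))
    params, J = fine_tune_step(params, np.array([0.5, 0.1, 0.2]),
                               np.full(27, 1.0 / 27), r=1.0, y=0.8, dy_du=1.0)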


4. Superheated steam temperature cascade control system


The superheated steam temperature is an important index in the operation of a monoblock unit in a power plant. It is closely related to the heat efficiency of the unit and heavily affects the safety and economy of plant operation. Generally, the superheated steam temperature is controlled at 540 (+5/−10) °C; that is, 530-545 °C is suitable and reasonable. Because the superheated steam temperature process has a large inertia and a long delay time, controlling it efficiently deserves particular attention. At present, the typical control scheme for the superheated steam temperature is the cascade control system, which employs the desuperheating water as the manipulated variable.


The cascade control system for the steam temperature is shown in Fig. 2, where the main controller employs the DFNN algorithm and the vice (secondary) controller adopts the PI algorithm. The steam temperature process is separated into two fields, the prior (leading) field and the inertia field, whose transfer functions are

    (17)

    (18)
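To show how the pieces of Fig. 2 are wired together, here is a minimal Python sketch of the cascade loop. Since the transfer functions (17) and (18) are not reproduced above, two first-order lags with made-up time constants stand in for the prior and inertia fields, and a simple proportional law stands in for the DFNN main controller; only the cascade structure, not the dynamics, is meant to be illustrative.

    def lag(y, u, gain, tau, dt):
        # One Euler step of a first-order lag; a placeholder for each field of the plant.
        return y + dt * (gain * u - y) / tau

    dt = 1.0
    r = 540.0                          # superheated steam temperature setpoint (deg C)
    y1, y2 = 520.0, 520.0              # leading-section and inertia-section temperatures
    i_vice = y1 / 0.05                 # start the PI integrator so the valve command matches y1
    for k in range(3000):
        e = r - y2                     # main loop error (handled by the DFNN in the paper)
        r1 = r + 2.0 * e               # proportional stand-in: setpoint handed to the vice loop
        e1 = r1 - y1                   # vice loop error on the leading-section temperature
        i_vice += e1 * dt
        u = 2.0 * e1 + 0.05 * i_vice   # PI vice controller driving the desuperheating water
        y1 = lag(y1, u, 1.0, 30.0, dt)     # prior (leading) field, placeholder dynamics
        y2 = lag(y2, y1, 1.0, 120.0, dt)   # inertia field, placeholder dynamics
    print(round(y2, 1))                # settles near the 540 deg C setpoint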






5. Simulation


This cascade control system with the DFNN algorithm was simulated in MATLAB. Fig. 3 shows the comparison of two step responses, in which response 1 and response 2 represent the DFNN algorithm (as main controller) and the PID algorithm (as main controller) respectively. It can be seen from Fig. 3 that the control performance is obviously improved when the DFNN is employed instead of PID: the former has zero overshoot and the latter 0.0954. Fig. 4 shows a comparison of the abilities to reject disturbances; in the simulation the disturbances are entered at 500 seconds. It can be seen from Fig. 4 that the DFNN has a better disturbance-rejection ability than the traditional PID algorithm.




Fig. 3 Comparison of the step responses of DFNN and PID

Fig. 4 Comparison of the disturbance-rejection abilities of DFNN and PID








6. Conclusion



The DFNN algorithm presented in this paper combines the classical least-squares method with the BP algorithm; it not only exploits the strong robustness of least squares and the clear conception and precision of the BP algorithm but also overcomes their drawbacks. The remaining drawback of the network is its relatively slow computation. Nevertheless, the DFNN can be employed to improve the control quality of the steam temperature cascade control system. The simulation shows that the method is effective: fast in response, minimal in overshoot, and robust.


References

[1] T. Takagi and M. Sugeno. Fuzzy Identification of Systems and Its Applications to Modeling and Control. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-15, No. 1 (1985): 116-132.

[2] Xie Keming, Zhang Jianwei. A Linear Fuzzy Model Identification Method Based on Fuzzy Neural Networks. Proceedings of the 2nd World Congress on Intelligent Control and Intelligent Automation (CWCICIA'97), Vol. 1: 316-320.

[3] Xie Keming, Nan Jianfeng. A Fast Fuzzy-Neural Feedback Network and its Application in Modeling. Proceedings of ICAIE'98, 1998: 499-502.

[4] Zhang Yuduo, Wang Manjia. Thermotechnical Automatic Control System. Beijing: Press of Hydroelectric, 1987: 201-203.