Research of a Fan Fault Diagnosis System Based on Wavelet and Neural Network




Guang-zhong Cao 1, Xiao-yu Lei 2, Chang-geng Luo 3

1 College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen. E-mail: gzcao@szu.edu.cn
2 College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen. E-mail: lxywy@139.com
3 Physics & Electronic Engineering College, Nanyang Normal University, Nanyang. E-mail: L918@163.com

Digital ref: 062

Abstract — An online fan fault diagnosis system based on wavelet analysis and a neural network is proposed, and the system is implemented on the LabVIEW platform. Relying on the noise signal from the fan, the recognition system uses the power spectrum gravity center, the sound level and the wavelet frequency-band power of the signal as the feature vector, and a BP network as the classifier for fault diagnosis. The experimental results show that the combination of wavelet analysis and the neural network extracts fault information effectively. The entire system has adaptability and fault-tolerant capability.


Keywords — sound pressure level, power spectrum, wavelet, BP network, fault diagnosis


I. INTRODUCTION


Fan faults arise from complex mechanisms and take many forms. It is difficult to diagnose the causes of fan faults because the mapping relationship between faults and symptoms is lacking. Current fault diagnosis methods for rotating machinery, such as vibration detection and temperature monitoring, require contact measurement. However, those methods are not available for many important field devices.

The local characteristics of the wavelet, in both the time domain and the frequency domain, make it good at extracting time-varying signal characteristics, and the neural network has a strong capability to identify multi-dimensional and non-linear models. Therefore, it is possible to improve the accuracy of the diagnosis system by combining the wavelet and the neural network.

In this paper, a fan fault diagnosis system based on the wavelet and the neural network is designed. It uses the noise produced by the fan as the diagnosis signal and adopts non-contact measurement. The system is good at feature extraction and adaptive learning, and its diagnosis results are credible.


II. BASIC THEORY


A. Wavelet theory and Mallat algorithm

The wavelet can perform local analysis in the time domain and the frequency domain at the same time. It decomposes the signal into several independent frequency bands. For the fan noise signal, the energy variation of the different frequency bands always corresponds to different faults. Therefore, by calculating the signal energy of each frequency band, the characteristic vector of the signal can be extracted. The Mallat algorithm is one of the fast algorithms of the wavelet transform.

The continuous wavelet transform of a function $f(t) \in L^{2}(R)$ is defined by [1]

$$W_f(a,b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{+\infty} f(t)\, \psi^{*}\!\left(\frac{t-b}{a}\right) dt \qquad (1)$$

where $\psi(t)$ satisfies the admissibility condition $\int_{-\infty}^{+\infty} |\hat{\psi}(\omega)|^{2}/|\omega|\, d\omega < \infty$ and is taken as the mother wavelet function; $a$ is the scale factor and $b$ is the translation factor; $\psi^{*}$ is the conjugate function of $\psi$.


The wavelet family is described by $\psi_{a,b}(t) = |a|^{-1/2}\,\psi\!\left(\frac{t-b}{a}\right)$; here the discrete forms of $a$ and $b$ are $a = 2^{-j}$ and $b = k\,2^{-j}$, respectively. Thus the wavelet family and the wavelet coefficients can be expressed by $\psi_{j,k}(t) = 2^{j/2}\,\psi(2^{j}t - k)$ and $c_{j,k} = \langle f, \psi_{j,k} \rangle$, respectively. Therefore, a square integrable function $f(t)$ can be decomposed into a linear summation of the wavelet family, and its coefficients can be calculated by the wavelet transform at binary dilations and binary positions [2].

One fast algorithm of the wavelet transform is the Mallat algorithm. Its role in wavelet analysis is the same as that of the FFT in the Fourier transform. Suppose the discrete sample sequence of the signal is expressed by $x(n)$, $n = 1, 2, \ldots, N$, and $x(n)$ is taken as the approximation of the signal at scale $j = 0$, marked by $a_{0}(n)$. Then the decomposition formulas can be described by

$$a_{j+1}(n) = \sum_{k} h(k - 2n)\, a_{j}(k) \qquad (2)$$

$$d_{j+1}(n) = \sum_{k} g(k - 2n)\, a_{j}(k) \qquad (3)$$

After reconstruction, the formula can be described by

$$a_{j}(n) = \sum_{k} h(n - 2k)\, a_{j+1}(k) + \sum_{k} g(n - 2k)\, d_{j+1}(k) \qquad (4)$$

In formulas (2), (3) and (4) [3], $j = 0, 1, \ldots, J-1$ and $N$ is the length of the input sequence; $a_{j}$ is the approximation coefficient at the $j$th layer; $d_{j}$ is the detail coefficient at the $j$th layer; $h$ is the low-pass filter coefficient of the used wavelet; $g$ is the high-pass filter coefficient of the used wavelet.
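As an illustrative sketch (not the paper's LabVIEW implementation), the per-band energy extraction based on the Mallat decomposition can be expressed in Python with the PyWavelets package; the db4 wavelet order, the toy signal and the relative-energy normalization are assumptions chosen to mirror Section IV.

```python
# Sketch: band-energy feature extraction via Mallat decomposition (PyWavelets).
# Assumptions: db4 wavelet, 4 decomposition levels, relative (percentage) energies.
import numpy as np
import pywt

def band_energies(signal, wavelet="db4", level=4):
    """Return the relative energy of each wavelet frequency band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)    # [a4, d4, d3, d2, d1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])  # energy per band
    return energies / energies.sum() * 100.0               # percentage of total energy

if __name__ == "__main__":
    fs = 5000                        # sampling frequency used in the experiments
    t = np.arange(2048) / fs         # 2048 samples, as in Section IV.B
    x = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(t.size)  # toy noise signal
    print(band_energies(x))          # a4, d4, d3, d2, d1 relative energies
```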


B. Principle of the BP network

The error back-propagation neural network is called the BP network. The algorithm turns the relationship between the training sample inputs and the target outputs into a nonlinear optimization problem, and then uses the gradient descent method to obtain the weight values between nodes. In effect, the network stores the mapping relationship between input and output in the form of weight values.

The structural model of the BP network is shown in Fig. 1; it consists of an input layer, a hidden layer and an output layer. The input layer receives the characteristic parameter information; the hidden layer learns and processes the input information and connects the input layer and the output layer through weights [4]; the output layer is compared with the target values continually and the error is propagated back.



Fig. 1: The structural model of the BP network

The weights from the input layer to the hidden layer and those from the hidden layer to the output layer are corrected continually by repeated forward propagation and back propagation of the training samples. Finally, the inherent law hidden in the input samples is found. In Fig. 1, $x_{i}$ is the input characteristic parameter; $n$ is the number of input nodes; $w_{ij}$ is the weight from the input layer to the hidden layer; $z_{j}$ is the output of the hidden layer; $q$ is the number of hidden layer nodes; $v_{jk}$ is the weight from the hidden layer to the output layer; $y_{k}$ is the output signal; $m$ is the number of output layer nodes; $t_{k}$ is the target value of the network.

The forward propagation process can be described as follows.

Input layer: adopting a linear input function makes every output equal to its input.

Hidden layer: the input of any hidden node is the weighted summation of the forward outputs, $net_{j} = \sum_{i=1}^{n} w_{ij} x_{i} - \theta_{j}$, where $\theta_{j}$ is the threshold of the hidden layer nodes. The output is $z_{j} = f(net_{j})$; $f$ can use the Sigmoid function, that is, $f(u) = 1/(1 + e^{-u})$.

Output layer: the weighted summation of the hidden layer outputs, $net_{k} = \sum_{j=1}^{q} v_{jk} z_{j}$, is the input of the output layer. Adopting a linear output function makes the $k$th output be $y_{k} = net_{k}$, where $k = 1, 2, \ldots, m$.

During the back propagation process, the error function is defined by $E = \frac{1}{2} \sum_{k=1}^{m} (t_{k} - y_{k})^{2}$; if the output error does not satisfy the requirement, the network propagates the error back to modify the weights.

For the correction of the weights, the learning algorithm of the BP network adopts the gradient descent method, with the adjustment $\Delta w = -\eta\, \partial E / \partial w$. From this formula, the weight correction between the hidden layer and the output layer is $\Delta v_{jk} = \eta\, \delta_{k} z_{j}$, where $\eta$ is the learning rate and $\delta_{k} = t_{k} - y_{k}$; the weight correction between the input layer and the hidden layer is $\Delta w_{ij} = \eta\, \delta_{j} x_{i}$, where $\delta_{j} = z_{j}(1 - z_{j}) \sum_{k} \delta_{k} v_{jk}$.
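As an illustrative sketch only, the forward and back propagation described above can be written in Python/NumPy for the 4-2-4 structure used later in Section IV; the random initialization in [0, 1), the learning rate of 0.2 and the stopping criteria follow Section IV, while the per-sample update order, the threshold handling and the toy data are assumptions.

```python
# Sketch: BP network with a linear input/output layer and a Sigmoid hidden layer,
# trained by gradient descent as described above. Sizes follow Section IV (4-2-4).
import numpy as np

rng = np.random.default_rng(0)
n, q, m = 4, 2, 4                       # input, hidden and output node numbers
W = rng.random((n, q))                  # input-to-hidden weights, init in [0, 1)
V = rng.random((q, m))                  # hidden-to-output weights, init in [0, 1)
theta = rng.random(q)                   # hidden layer thresholds

def forward(x):
    z = 1.0 / (1.0 + np.exp(-(x @ W - theta)))   # Sigmoid hidden layer
    y = z @ V                                     # linear output layer
    return z, y

def train(X, T, eta=0.2, err_goal=0.05, max_iter=5000):
    """Gradient-descent training until the summed squared error meets the goal."""
    global W, V, theta
    for it in range(max_iter):
        E = 0.0
        for x, t in zip(X, T):
            z, y = forward(x)
            delta_k = t - y                              # output-layer error term
            delta_j = z * (1.0 - z) * (V @ delta_k)      # hidden-layer error term
            V += eta * np.outer(z, delta_k)              # hidden-to-output correction
            W += eta * np.outer(x, delta_j)              # input-to-hidden correction
            theta -= eta * delta_j                       # threshold correction
            E += 0.5 * np.sum((t - y) ** 2)
        if E < err_goal:
            return it + 1
    return max_iter

# Toy usage: four one-hot fault modes with made-up feature vectors.
X = rng.random((4, 4))
T = np.eye(4)
print("iterations:", train(X, T), "first output:", np.round(forward(X[0])[1], 3))
```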


C. The evaluating indicator of the sound pressure level

The pressure fluctuations in the medium are due to the sound disturbance, and the excess of the pressure over the original static pressure is called the sound pressure. For a period of time $T$, the root mean square value of the instantaneous sound pressure is described by [5]

$$p_{e} = \sqrt{\frac{1}{T} \int_{0}^{T} p^{2}(t)\, dt} \qquad (5)$$

where $p(t)$ is the instantaneous sound pressure, $T$ is the time interval, and $p_{e}$ is the equivalent sound pressure.


The sound pressure level is calculated by

$$L_{p} = 20 \lg \frac{p_{e}}{p_{0}} \qquad (6)$$

where $L_{p}$ is the sound pressure level in dB, $p_{0}$ is the reference acoustic pressure ($2 \times 10^{-5}$ Pa in air), and $p_{e}$ is the equivalent sound pressure. Weighted sound pressure levels can be obtained with the A, B, C and D weighting networks, respectively. It is well known that the A-weighting scale corresponds most closely to the response of the human ear.
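For illustration (assumed Python, not the paper's LabVIEW code), equations (5) and (6) can be evaluated from a block of sampled pressure values as follows; the calibration of the samples to pascals and the discrete RMS approximation are assumptions.

```python
# Sketch: equivalent sound pressure (eq. 5) and sound pressure level (eq. 6)
# from a block of calibrated pressure samples. Assumes samples are in pascals.
import numpy as np

P0 = 2e-5  # reference acoustic pressure in air, Pa

def sound_pressure_level(p):
    """Return (p_e, L_p) for a block of sampled pressure values p."""
    p_e = np.sqrt(np.mean(p ** 2))            # discrete RMS pressure, approximating eq. (5)
    L_p = 20.0 * np.log10(p_e / P0)           # sound pressure level in dB, eq. (6)
    return p_e, L_p

if __name__ == "__main__":
    fs = 5000
    t = np.arange(2048) / fs
    p = 0.02 * np.sin(2 * np.pi * 120 * t)    # toy 120 Hz tone, 0.02 Pa amplitude
    print(sound_pressure_level(p))            # roughly (0.014 Pa, 57 dB)
```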



III. DESIGN OF DIAGNOSIS SYSTEM


A. Structure of the hardware system

As shown in Fig. 2, the fan under test is the diagnostic target. A microphone converts the noise signal produced by the fan into a voltage signal, and the amplifying circuit enlarges the voltage signal to the required range. The A/D data acquisition card, a PXI-4472 made by NI, converts the analogue signal into a digital signal. The digital signal is then read by a control in LabVIEW, and a wavelet neural network (WNN) fault diagnosis platform is constructed in LabVIEW.


Fig. 2: Structure of the hardware system


B. Design of the software system

As shown in Fig. 3, the software system consists of three parts: training sample acquisition, network training and on-line diagnostics. The function of each part is described as follows.

Training sample acquisition: relying on the target under test, this part sets the sampling frequency, the number of samples, the sample size and the state of the sampled signal, such as the normal state, paper choking, eccentric blade, blade breaks and so on, and then acquires a certain number of signal samples in each state. Finally, it calculates the characteristic vector of each state and saves the vector to a specified location.

Network training: this part sets the node numbers of the input layer, the hidden layer and the output layer; initializes the weights and threshold values of each layer; trains the network and saves the related parameters for later use.

On-line diagnostics: this part acquires the noise signal of the fan, calculates the characteristic vector, feeds it into the trained network, and outputs and saves the fault probabilities and alarm messages.



Fig. 3: Structure of the software system


C. Extracting the characteristic vector

How to extract the characteristic vector and which parameters are selected directly affect the performance of the diagnosis system. In the proposed system, the characteristic vector consists of the A-weighted sound level, the power spectrum gravity centre, and the signal energy of each frequency band after wavelet decomposition.


The power spectrum gravity center (FC) is calculated by

$$FC = \frac{\sum_{i} f_{i}\, P(f_{i})}{\sum_{i} P(f_{i})} \qquad (7)$$

where $f_{i}$ is the frequency and $P(f_{i})$ is the magnitude of the power spectrum at $f_{i}$.


If a fault occurs, the magnitudes of some frequencies change, which affects the position of the power spectrum gravity center, the energy distribution over the frequency bands after wavelet decomposition, and the measured A-weighted sound level.
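As an illustrative sketch (assumed Python rather than the original LabVIEW code), the power spectrum gravity center of equation (7) can be computed from a sampled signal as follows; the FFT-periodogram power-spectrum estimate is an assumption.

```python
# Sketch: power spectrum gravity center (FC), eq. (7), from a sampled signal.
# Assumption: the power spectrum is estimated with a simple FFT periodogram.
import numpy as np

def power_spectrum_gravity_center(x, fs):
    """Return the spectral gravity center (Hz) of signal x sampled at fs."""
    X = np.fft.rfft(x)
    P = np.abs(X) ** 2                  # power spectrum magnitude
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(f * P) / np.sum(P)    # weighted mean frequency, eq. (7)

if __name__ == "__main__":
    fs = 5000
    t = np.arange(2048) / fs
    x = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.randn(t.size)
    print(round(power_spectrum_gravity_center(x, fs), 1))  # near 200 Hz, shifted up by noise
```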


IV. EXPERIMENTS




Four RDM8025S fans made by RUILIAN SCIENCE were used in the fault diagnosis experiment. One fan is normal; the other three exhibit paper choking, eccentric blade and blade breaks, respectively.


A. The network structure design

The characteristic vector $X = (x_{1}, x_{2}, x_{3}, x_{4})$ was the input of the network. The fault modes, namely normal, paper choking, eccentric blade and blade breaks, composed the output vector $T = (t_{1}, t_{2}, t_{3}, t_{4})$. For example, $(0, 1, 0, 0)$ shows that the fan fault is paper choking.

The node number of the input layer equals four, which is the dimension of $X$. Similarly, the node number of the output layer equals the dimension of $T$. The node number of the hidden layer was set to two, which is an empirical choice.

A linear action function is chosen for the input and output layers. The action function of the hidden layer is the Sigmoid function.


B. Signal acquisition

The sampling frequency was set to 5000 Hz and the number of samples was 2048. A Daubechies wavelet was used and the decomposition level was four. Under each of the four states, we acquired ten sets of data, of which two sets were selected to calculate the characteristic vectors. The results are shown in Table 1.



For the second, fourth, sixth and eighth sets of signal samples, the power spectra and the positions of the gravity centre are shown in Fig. 4. There are obvious differences between the sets. From the x1 (FC) column of Table 1, we can find that the differences of the power spectrum gravity center are also obvious.


The wavelet function was applied to the second, fourth, sixth and eighth sets of signal samples to perform a four-layer wavelet decomposition. The time-domain waveforms of the high-frequency coefficients at the fourth and third layers are shown in Fig. 5. From the x3 (d4) and x4 (d3) columns of Table 1, it is obvious that the third- and fourth-layer energies of each group of signals, and the energy ratios of the high-frequency coefficients, all differ from one another.


C. Network training

To keep the learning stable and avoid network oscillation, the learning rate, the weights and the target error should be selected suitably when executing the network training.

If the learning rate is too large, there will be constant oscillation in the network, making it difficult to reach the target value; if the target error is too small, the requirement cannot be satisfied within the specified number of iterations.

The initial weights and threshold values were set as random numbers between 0 and 1. The target error was set to 0.05 and the maximum number of iterations to 5000.


After the network was trained, the relationship between the actual output error and the iteration number is shown in Fig. 6. In Fig. 6(a), the learning rate is 0.7; there was a fairly large oscillation during the training, and 3822 iterations were needed before the output error met the requirement. In Fig. 6(b), the learning rate is 0.2.







Table 1: Characteristic values of the training samples and the corresponding fault modes

Sample   x1 (FC)   x2 (dBA)   x3 (d4)   x4 (d3)   Mode
1        200.170   78.350     46.306    27.273    Normal
2        190.829   77.913     44.669    23.890    Normal
3        300.180   83.114     61.277    32.537    Paper choking
4        295.919   82.746     59.160    33.841    Paper choking
5        132.724   80.684     65.717    28.857    Eccentric blade
6        134.868   81.450     83.176    38.774    Eccentric blade
7        113.926   74.022     31.793    18.459    Blade breaks
8        122.163   75.550     36.515    19.621    Blade breaks





Fig. 4: Power spectra and gravity centre positions of the samples. (a) the second group; (b) the fourth group; (c) the sixth group; (d) the eighth group.







Fig. 5: The third- and fourth-layer high-frequency coefficients of the samples after wavelet decomposition. (a) fourth-layer and (b) third-layer coefficients of the second group sample; (c) fourth-layer and (d) third-layer coefficients of the fourth group sample; (e) fourth-layer and (f) third-layer coefficients of the sixth group sample; (g) fourth-layer and (h) third-layer coefficients of the eighth group sample.








Fig. 6: Relationship between the output error and the iteration number. (a) Learning rate 0.7; (b) learning rate 0.2.


Table 2: Fault modes of the training samples (t1-t4) and outputs of the network (y1-y4)

Sample   t1   t2   t3   t4   y1       y2       y3       y4
1        1    0    0    0    0.9695   0.0104   0.0002   0.0005
2        1    0    0    0    0.9769   0.0087   0.0001   0.0009
3        0    1    0    0    0.0567   0.9804   0.0007   0.0000
4        0    1    0    0    0.0441   0.9754   0.0021   0.0000
5        0    0    1    0    0.0003   0.0000   0.9921   0.0054
6        0    0    1    0    0.0005   0.0000   0.9434   0.0236
7        0    0    0    1    0.0268   0.0000   0.0325   0.9194
8        0    0    0    1    0.0299   0.0000   0.0146   0.9858


Table 3: Characteristic parameters of the test samples acquired on-line and the outputs of the network

Sample               x1 (FC)   x2 (dBA)   x3 (d4)   x4 (d3)   y1      y2      y3      y4
1 (Normal)           221.104   79.039     46.127    24.898    0.976   0.047   0.001   0.000
2 (Paper choking)    273.696   81.797     54.367    31.075    0.016   0.980   0.005   0.000
3 (Eccentric blade)  135.684   78.717     60.857    26.812    0.003   0.060   0.994   0.010
4 (Blade breaks)     110.927   74.364     31.303    17.032    0.002   0.000   0.044   0.997
However, the oscillation during the training was small; when the output error decreased from 31.2 to 0.049997 and met the requirement, the iteration number was 3677. The fault modes of the training samples (t1, t2, t3, t4) and the actual outputs of the network (y1, y2, y3, y4) are listed in Table 2. It can be seen from Table 2 that the fault modes of the acquired samples have been recognized accurately by the network; the error is less than 0.1. Finally, the related parameters of the network were stored in the specified location.


D. On-line diagnostics

The well trained network was used to diagnose the fan under test. The noise signal produced by the fan working in each mode was acquired, the corresponding characteristic vector was calculated from the sampled values, and the vector was fed into the trained network to perform the on-line diagnostics. Table 3 lists the characteristic parameters and the corresponding network outputs. From Table 3, we can find that although the characteristic vectors of the fan under test differ from those of the training samples, the proposed system diagnoses the faults accurately.




V. CONCLUSION


In the proposed intelligent fault diagnosis system, the noise produced by the fan was used as the diagnosis signal, non-contact measurement was adopted, and the wavelet neural network performed the non-linear mapping from the feature space to the fault space. Modular programming is adopted in the system, so it is easy to extend and to change the characteristic fault parameters and the structural parameters of the network. Utilizing its learning, memory and reasoning abilities, the system diagnoses faults adaptively.








VI. ACKNOWLEDGEMENTS

The authors would like to thank the research funds 2006AA040105, 2007BAF15B01, 2007BAF15B03, 2008A011400006 and 2007B090400056, and the Shenzhen government fund, for their support.


VII. REFERENCES

[1] S. Mallat. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1989, 11(7): 674-693.

[2] Heji Yu, Changzheng Chen and Sheng Zhang. Intelligent Diagnosis Based on Neural Networks. Metallurgical Industry Press, Beijing, 2000: 75-78, 108-110.

[3] Olivier Rioul. Fast Algorithms for Discrete and Continuous Wavelet Transforms. IEEE Transactions on Information Theory, 1992, 38(2): 569-586.

[4] Aiyuan Liu. Fault Diagnosis of Plane Electric Starting System Based on BP Neural Network. Chinese Journal of Scientific Instrument, No. 6, 2002: 682-686.

[5] Shaodong Zhang, Jiaqing Sun. Principle and Application of Sound Level Meter. Measurement Press, Beijing, 1986: 28-31.