National Taiwan University
Advanced Digital Signal Processing
Term Paper (Tutorial)
Nonlinear Time Invariant System Analysis
Department: Graduate Institute of Electronics Engineering
Year: First-year master's student
Student ID: R01943018
Name: 黃乃珊
Nonlinear Time Invariant System Analysis
Version 1
"Nonlinear Time Invariant System Analysis"
R98942048 黃信凱
R01943018 黃乃珊
Abstract
The equations governing the behavior of dynamic systems are usually nonlinear. Even in cases where a linear approximation is justified, its range of validity is likely to be limited. The engineer faced with the design or operation of dynamic systems, especially the control engineer, must understand the various modes of operation that a system may exhibit. Usually, a system is designed to yield operation in a certain mode and, at the same time, suppression of some other modes. A typical example is the design of a servo exhibiting asymptotic stability of the response to every constant input but which cannot go into self-oscillations. Unlike linear systems, nonlinear systems can exhibit different behavior at different signal levels. The fact that a system is nonlinear, however, may not necessarily constitute a disadvantage. Nonlinearities are frequently introduced to yield optimal performance in a system. It is the objective of this tutorial to discuss some of the fundamental properties of nonlinear systems and to illustrate some of the inherent problems, as well as considerations needed when dealing with the analysis or design of nonlinear time invariant systems.
Keywords
Nonlinear time-invariant system, phase space, stability analysis, approximate method, describing function, Krylov-Bogoliubov asymptotical method.
Contents
Abstract
1. Introduction
2. Introduction to Analysis of Nonlinear System
3. Approximate Analysis Methods
4. Stability of Nonlinear Systems
5. The Applications
6. Conclusion
7. References
1. Introduction
Every system can be characterized by its ability to accept an input such as voltage, pressure, etc., and to produce an output in response to this input. An example is a filter whose input is a signal corrupted by noise and interference and whose output is the desired signal. So, a system can be viewed as a process that results in transforming input signals into output signals.
First of all, we review the concept of systems by discussing the classification of systems according to the way the system interacts with the input signal. This interaction, which defines the model for the system, can be linear or nonlinear, time-invariant or time-varying, memoryless or with memory, causal or noncausal, stable or unstable, and deterministic or nondeterministic. We briefly review the properties of each of these classes.
1.1 Linear and Nonlinear Systems
When the system is linear, the superposition principle can be applied. This important fact is the reason that the techniques of linear-system analysis have been so well developed. The superposition principle can be stated as follows. If the input/output relations of a system are

x1(t) -> y1(t) and x2(t) -> y2(t), then αx1(t) + βx2(t) -> αy1(t) + βy2(t)

then the system is linear. So, a system is said to be nonlinear if this relation is not valid.
Example: Consider the voltage divider shown in Figure 1 with R1 = R2. For input x(t) and output y(t), this is a linear system. The input/output relation can be written as

y(t) = [R2 / (R1 + R2)] x(t) = (1/2) x(t)

On the other hand, if R1 is a voltage-dependent resistor such that R1 = x(t)R2, then the system is nonlinear. The input/output relation in this case can be written as

y(t) = [R2 / (x(t)R2 + R2)] x(t) = x(t) / (x(t) + 1)
Figure 1-1
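The linearity test above can be checked numerically. The following sketch compares the two dividers from the example (the helper `violates_superposition` and the test values are illustrative, not part of the original text):

```python
# Numerical check of the superposition principle for the two dividers above
# (linear: y = x/2; nonlinear: y = x/(x+1)).

def linear_divider(x):
    # R1 = R2, so y(t) = R2/(R1+R2) * x(t) = x(t)/2
    return x / 2.0

def nonlinear_divider(x):
    # R1 = x(t)*R2, so y(t) = R2*x(t)/(x(t)*R2 + R2) = x(t)/(x(t)+1)
    return x / (x + 1.0)

def violates_superposition(system, x1, x2, a=2.0, b=3.0, tol=1e-9):
    # Compare system(a*x1 + b*x2) with a*system(x1) + b*system(x2).
    lhs = system(a * x1 + b * x2)
    rhs = a * system(x1) + b * system(x2)
    return abs(lhs - rhs) > tol

print(violates_superposition(linear_divider, 1.0, 2.0))     # False: linear
print(violates_superposition(nonlinear_divider, 1.0, 2.0))  # True: nonlinear
```

For the linear divider the two sides agree for every choice of inputs and scalars; for the voltage-dependent divider they do not, which is exactly the failure of superposition discussed above.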
1.2 Time-Varying and Time-Invariant Systems
A system is said to be time-invariant if a time shift in the input signal causes the same time shift in the output signal. If y(t) is the output corresponding to input x(t), a time-invariant system will have y(t - t0) as the output when x(t - t0) is the input. So, the rule used to compute the system output does not depend on the time at which the input is applied.
On the other hand, if the response to the shifted input x(t - t0) is not equal to y(t - t0), we call this system time variant or time varying.
There are many examples of time-varying systems. For example, an aircraft is a time-varying system. The time-variant characteristics are caused by the different configurations of the control surfaces during takeoff, cruise, and landing, as well as by the constantly decreasing weight due to consumption of fuel.
1.3 Systems With and Without Memory
For most systems, the inputs and outputs are functions of the independent variable. A system is said to be memoryless if the present value of the output depends only on the present value of the input. For example, a resistor is a memoryless system, since with input x(t) taken as the current and output y(t) taken as the voltage, the input/output relationship is y(t) = Rx(t), where R is the resistance. Thus, the value of y(t) at any instant depends only on the value of x(t) at that time. On the other hand, a capacitor is an example of a system with memory.
1.4 Causal Systems
A causal system is a system where the output depends on past and current inputs but not future inputs. The idea that the output of a system at any time depends only on past and present values of the input is defined by the property commonly referred to as causality. A system that has some dependence on input values from the future is termed a noncausal or acausal system, and a system that depends only on future input values is an anticausal system. Classically, nature or physical reality has been considered to be a causal system.
1.5 Linear Time Invariant Systems
We have discussed a number of basic system properties. Linear time-invariant systems play a fundamental role in signal and system analysis because of the many physical phenomena that can be modeled by them. A linear time-invariant (LTI) system is completely characterized by its impulse response h(t), and the output y(t) of an LTI system is the convolution of the input x(t) with the impulse response of the system:

y(t) = x(t) * h(t) = ∫ x(τ) h(t - τ) dτ
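A minimal discrete-time sketch of the convolution sum y = x * h (the impulse response values below are illustrative):

```python
# y[n] = sum_k x[k] * h[n-k]: the output of an LTI system is the
# convolution of the input with the impulse response.

def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

h = [1.0, 0.5, 0.25]   # assumed impulse response of a decaying system
x = [1.0, 0.0, 0.0]    # a unit impulse input
print(convolve(x, h))  # the first samples reproduce h itself
```

Feeding in a unit impulse recovers the impulse response, which is why h completely characterizes the LTI system.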
1.6 Nonlinear Time Invariant Systems
With nonlinear systems, we cannot count on the above nice properties. A nonlinear time invariant system is a system whose operator is time-invariant but depends on the input. For example, a square amplifier is a nonlinear time invariant system, provided

y(t) = O[x(t)] = a x²(t)

Other examples are rectifiers, oscillators, phase-locked loops (PLL), etc. Note that all real electronic systems become practically nonlinear owing to saturation. Because of the difficulties involved in nonlinear analysis, approximation methods are commonly used.
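The square-law amplifier makes both properties concrete: it is time-invariant yet nonlinear. A sketch on sampled signals (the gain a = 2 and the test signal are assumptions for the illustration):

```python
# The square-law amplifier y(t) = a*x(t)^2 is time-invariant but nonlinear.

a = 2.0
def square_amp(x):
    return [a * v * v for v in x]

x = [0.0, 1.0, 2.0, 3.0, 0.0]

# Time invariance: delaying the input by one sample delays the output.
delayed_in = [0.0] + x[:-1]
assert square_amp(delayed_in) == [0.0] + square_amp(x)[:-1]

# Nonlinearity: doubling the input quadruples (not doubles) the output.
doubled = square_amp([2 * v for v in x])
assert doubled == [4 * v for v in square_amp(x)]
print("time-invariant but nonlinear")
```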
2. Introduction to Analysis of Nonlinear System
Nonlinear systems, with either inherent nonlinear characteristics or nonlinearities deliberately introduced into the system to improve their dynamic characteristics, have found wide application in the most diverse fields of engineering. The principal task of nonlinear system analysis is to obtain a comprehensive picture, quantitative if possible, but at least qualitative, of what happens in the system if the variables are allowed, or forced, to move far away from the operating points. This is called the global, or in-the-large, behavior. Local, or in-the-small, behavior of the system can be analyzed on a linearized model of the system.
So, the local behavior can be investigated by rather general and efficient linear methods that are based upon the powerful superposition and homogeneity principles. If linear methods are extended to the investigation of the global behavior of a nonlinear system, the results can be erroneous both quantitatively and qualitatively, since the nonlinear characteristics may be essential but the linear methods may fail to reveal it. Therefore, there is a strong emphasis on the development of methods and techniques for the analysis and design of nonlinear systems.
2.1 The Phase-Space Approach
The phase-space, or more specifically the phase-plane, approach has been used for solving problems in mathematics and physics at least since Poincare. The approach gives both the local and the global behavior of the nonlinear system and provides an exact topological account of all possible system motions under various operating conditions. It is convenient, however, only in the case of second-order equations, and for higher-order cases the phase-space approach is cumbersome to use.
Nevertheless, it is a powerful concept underlying the entire theory of ordinary differential equations (linear or nonlinear, time varying or time invariant). It can be extended to the study of higher-order differential equations in those cases where a reasonable approximation can be made to find an equivalent second-order equation. However, this may lead to either erroneous conclusions about the essential system behavior, such as stability and instability, or various practical difficulties such as time scaling.
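A phase-plane trajectory of a second-order system can be sketched numerically. The example below uses the Van der Pol equation x'' - mu(1 - x²)x' + x = 0 with mu = 1 (the equation, parameter, and step size are illustrative choices, not taken from the text above); the computed (x, x') points spiral onto a limit cycle:

```python
# Forward-Euler integration of the Van der Pol oscillator, tracing a
# trajectory in the (x, x') phase plane.

def van_der_pol_trajectory(x0, v0, mu=1.0, dt=1e-3, steps=200_000):
    x, v = x0, v0
    points = []
    for _ in range(steps):
        dv = mu * (1.0 - x * x) * v - x   # x'' from the equation
        x, v = x + dt * v, v + dt * dv    # Euler step (old v used for x)
        points.append((x, v))
    return points

traj = van_der_pol_trajectory(0.1, 0.0)
# Late in the run the orbit has settled near the limit cycle (peak |x| ~ 2).
peak = max(abs(p[0]) for p in traj[-20_000:])
print(round(peak, 2))
```

Starting from a small perturbation near the origin, the trajectory grows and settles onto the same closed orbit, which is the kind of global behavior the phase-plane picture reveals and a linearized model cannot.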
2.2 The Stability Analysis
The stability analysis of nonlinear systems, which is heavily based on the work of Liapunov, is a powerful approach to the qualitative study of the system's global behavior. By this approach, the global behavior of the system is investigated utilizing the given form of the nonlinear differential equations but without explicit knowledge of their solutions. Stability is an inherent feature of wide classes of systems; thus system theory is largely devoted to the stability concept and related methods of analysis.
Stability analysis, however, does not constitute a completely satisfactory theory for the design of nonlinear systems. The stability conditions, which are often hard to determine, are sufficient but usually not necessary. This comes from the fact that the given equations are reformulated for the application of the stability analysis. In that reformulation certain information about the specific system characteristics is lost and, unfortunately, the amount of information that is lost cannot be estimated.
For example, if a nonlinear system is found to be stable for a certain range of parameter values, it is not possible to predict how far from that range the parameter values can be chosen without affecting the system stability. Furthermore, the system can be unstable and still be satisfactory for practical applications. For example, a system can exhibit stable periodic oscillations and therefore be unstable in the sense that its equilibrium is not asymptotically stable. However, in the application of the system, these oscillations may not be observed because their amplitude is sufficiently small and the perturbations permanently acting on the system are large enough to drive the system far from the periodic oscillations.
2.3 Approximate Methods
Approximate methods for solving problems in mathematical physics have been received with much interest by engineers and have promptly obtained wide diffusion in diverse fields of system engineering. The basic merit of approximate methods consists in their being direct and efficient, and they permit a simple evaluation of the solution for a wide class of problems arising in the analysis of nonlinear oscillations.
The application of computer techniques and system simulations has given strong emphasis to those approximate methods which employ rather straightforward and realizable solution procedures and calculations. These methods enable a simple estimation of how different system structures and parameters influence the salient system dynamic characteristics. The application of a computer simulation can then provide the actual solution of the design problem. If the system behavior is not satisfactory, or if the computer solution does not agree with the predicted characteristics, the approximate methods can again be applied to guide the next step in the system simulation and also achieve a better solution of the analysis problem. If we interchange these two steps (that is, apply the approximate methods and then the computer simulation), the design converges eventually to a final satisfactory solution. This philosophy in the analysis of nonlinear systems can give improved results not only in a specific system but also in the related class of systems, and thus has an important generality in system theory and application.
It is of particular significance to classify the nonlinear problem before a specific technique is applied to its solution. Thus it is necessary to evaluate the potential of both the exact and the approximate methods before they are tested on the actual problem. This involves engineering experience and ingenuity in choosing the appropriate design technique and procedure. If an exact method is to be applied, we should be aware of the fact that it may require that a sequence of simplifications be introduced in the original problem.
In the simplifications, certain vital characteristics of the original problem can be lost (for example, the reduction of the order of a differential equation through neglect of one of the system parameters). Then the approximate solution of the original problem may represent the actual situation more appropriately and be of more use in the design. In addition, the approximate methods normally yield more information about the possible performance-criteria trade-offs or the structural and parameter changes that might enhance the overall system characteristics.
On the other hand, the exact methods can reveal various subtle phenomena in nonlinear system behavior that cannot be discovered by the approximate methods. It can be concluded that in a majority of practical problems both the exact and the approximate methods should be applied to obtain a satisfactory solution of the nonlinear system design problem, and the versatility of the designer in various solution procedures is a prerequisite for a successful system analysis and design.
In the application of the approximate methods, a significant problem is the estimation of their accuracy. A certain degree of accuracy is necessary to guarantee the applicability of the method involved, and to ensure the validity of both the qualitative conclusions and the quantitative results obtained by the approximate analysis. The accuracy problem, however, involves various mathematical difficulties, and the designer is often forced to use simple and practical approximate methods despite some pessimism about their validity, although promising results have been obtained in solving the accuracy problem.
Among the approximate methods used for the analysis of nonlinear oscillations, the Krylov-Bogoliubov asymptotical method stands out because of its usefulness in system engineering problems. The original method not only enables the determination of steady-state periodic oscillations, but also gives in evidence the transient process corresponding to small-amplitude perturbations of the oscillations. The latter is of particular interest in system design, where the transient process is often the ultimate goal. However, the method is applicable only to systems described by second-order nonlinear differential equations.
The approximate method to be used in the analysis of nonlinear systems, along with the parameter plane concept, is the harmonic linearization method, often called the describing function method or the method of harmonic balance. The harmonic linearization is heavily based on the Krylov-Bogoliubov approach, and can be applied to nonlinear systems described by higher-order differential equations.
3. Approximate Analysis Methods for Nonlinear System
In this section, we will present several methods for approximately analyzing a given nonlinear system. Because a closed-form analytic solution of a nonlinear differential equation is usually impossible to obtain, methods for carrying out an approximate analysis are very useful in practice. The methods presented here fall into three categories:
Describing function methods consist of replacing a nonlinear element within the system by a linear element and carrying out further analysis. The utility of these methods is in predicting the existence and stability of limit cycles, in predicting jump resonance, etc.
Numerical solution methods are specifically aimed at carrying out a numerical solution of a given nonlinear differential equation using a computer.
Singular perturbation methods are especially well suited for the analysis of systems where the inclusion or exclusion of a particular component changes the order of the system. For example, in an amplifier, the inclusion of a stray capacitance in the system model increases the order of the dynamic model by one.
The above three types of methods are only some of the many varieties of techniques that are available for the approximate analysis of nonlinear systems. Moreover, even with regard to the three subject areas mentioned above, the presentation here only scratches the surface, and references are given, at appropriate places, to works that treat the subjects more thoroughly.
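As a sketch of the numerical-solution category, the classical fourth-order Runge-Kutta (RK4) scheme can be applied to a nonlinear equation such as the pendulum theta'' + sin(theta) = 0, rewritten as a first-order system (the step size, horizon, and initial condition below are illustrative):

```python
# Classical RK4 integrator applied to the nonlinear pendulum equation.
import math

def rk4_step(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def pendulum(t, y):
    theta, omega = y
    return [omega, -math.sin(theta)]  # theta'' = -sin(theta)

y, t, h = [0.5, 0.0], 0.0, 0.01
for _ in range(1000):                 # integrate out to t = 10
    y = rk4_step(pendulum, y, t, h)
    t += h
# Energy conservation keeps |theta| bounded by the initial amplitude 0.5.
print(abs(y[0]) <= 0.5 + 1e-6)
```

No closed-form solution of this equation exists in elementary functions, which is precisely the situation in which such numerical methods are used.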
3.1 Mathematical Description of Nonlinear Systems
In general a nonlinear system consists of linear and nonlinear elements. The linear elements are described by linear differential equations. The nonlinear elements, which are normally very limited in number, are described by a nonlinear function or differential equation relating the input and output of the element. The nonlinear input-output relationship can have a rather arbitrary form. The parameter plane analysis to be presented is restricted to a certain class of these relationships.
In treating a real system as linear, we assume that the system is linear in a certain range of operation. The signals appearing at various points of the system are such that the superposition principle is justified. However, if signals in the system go beyond the range of linear operation and, for example, become either very large or very small, the characteristics of the system elements can be essentially different from the linearized characteristics, and the system must be treated as nonlinear. Such cases are illustrated graphically by the characteristics shown in Figure 2, where x denotes the input to the element and the output is given by the value of the function F(x). If the output of the element is denoted by y, the input-output relationship can be written analytically as

y = F(x)
Figure 3-1
Certain nonlinear characteristics can be given in analytical form. For example, the characteristic of Figure 2 can be analytically described by

F(x) = kx for |x| <= S,  F(x) = c sign(x) for |x| > S

The characteristic is linear with slope k = c/S for input magnitudes less than S, and it exhibits saturation for input magnitudes greater than S.
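The piecewise saturation characteristic above translates directly into a function (the values S = 1 and c = 2 are illustrative):

```python
# Saturation characteristic: linear with slope k = c/S inside |x| <= S,
# clipped to +/- c outside.

def saturation(x, S=1.0, c=2.0):
    k = c / S
    if abs(x) <= S:
        return k * x               # linear region, slope k
    return c if x > 0 else -c      # saturated region

print(saturation(0.5))   # 1.0  (linear region, k = 2)
print(saturation(3.0))   # 2.0  (saturated at +c)
print(saturation(-3.0))  # -2.0 (saturated at -c)
```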
In various practical applications the nonlinear characteristic is obtained experimentally, and an adequate analytical expression cannot be justified. On the other hand, some characteristics are conveniently expressed analytically, whereas a graphical interpretation is not possible.
So far, only single-valued nonlinear characteristics have been discussed; in the characteristics of Figure 2, to each value of the input x there is one and only one value of the output y = F(x). The characteristics in Figure 3, which have a hysteresis loop, are multi-valued nonlinear characteristics.
Figure 3-2
The hysteresis property can be such that the loop dimensions depend on the magnitude of the input signal. It is also to be noted that hysteresis-type nonlinear characteristics cannot be completely described by the function y = F(x), since the output y inherently depends on the direction of change in the magnitude of the input x. If the rate of change in x is greater than zero, the right-hand side of the loop represents the nonlinear characteristic, and vice versa. Thus the adequate description of the hysteresis type of nonlinearities should be expressed as

y = F(x, sign(sx)),  s = d/dt

rather than as y = F(x).
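The direction dependence means a hysteresis element needs internal state. A minimal sketch is an ideal relay with a hysteresis loop (the half-width d, output level c, and initial branch are illustrative assumptions):

```python
# Ideal relay with hysteresis: the output at a given x depends on the
# direction from which x arrived, so the element keeps state.

class HysteresisRelay:
    def __init__(self, d=0.5, c=1.0):
        self.d, self.c = d, c
        self.y = -c                # assume we start on the lower branch

    def step(self, x):
        if x > self.d:
            self.y = self.c        # rising past +d: switch to upper branch
        elif x < -self.d:
            self.y = -self.c       # falling past -d: switch to lower branch
        return self.y              # otherwise keep the previous output

relay = HysteresisRelay()
# The same input value x = 0 gives different outputs depending on history:
relay.step(1.0); up = relay.step(0.0)     # arrived from above: y = +1
relay.step(-1.0); down = relay.step(0.0)  # arrived from below: y = -1
print(up, down)  # 1.0 -1.0
```

This is exactly the multi-valuedness described above: y = F(x, sign(sx)) rather than y = F(x).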
Figure 3-3
Besides the analytical description of nonlinear elements and systems, it is essential to consider the structure of the system, which is usually given in familiar block diagram or signal flow graph form. The structure of the system displays certain inherent features of nonlinear systems that are not apparent in the analytical description.
The basic nonlinear system with one nonlinear element n is shown in Figure 4. It should be noted that the function F(x, sx) associated with n does not necessarily represent the nonlinear element, which may itself be described by a nonlinear differential equation such as

(Ts + 1)y = F(x, sx)
To make the analysis easier, the nonlinear function F(x, sx) may be isolated in the system, while all the linear relations are joined in the block G(s). For example, if the nonlinear element n is so described, it can be split into two equations:

(Ts + 1)y = z,  z = F(x, sx)

Then the equations are associated with the other linear elements of the system, and the function F(x, sx) is isolated in the block n. Naturally, the function F(x, sx) alone does not represent the nonlinear element n and therefore will be called the nonlinearity.
The linear elements may be coupled in an arbitrary way to make the equivalent transfer function G(s), whose order is not theoretically limited as far as the parameter plane analysis is concerned. However, certain restrictions on the nature of the function G(s) are imposed in order to justify the application of the approximate analysis. According to the block diagram of Figure 4, the transfer function G(s) is

G(s) = C(s) / B(s)

The function f = f(t), which may be either a desired input signal or an undesired perturbation, is applied somewhere in the linear part of the system. The block diagram of Figure 4 may represent a nonlinear system having two nonlinear elements connected in cascade, provided it is possible to isolate the two related nonlinearities and join them in one equivalent block.
3.2 Describing Function
Among the methods used for stability analysis and investigation of sustained nonlinear oscillations, sometimes called limit cycles, the describing function generally stands out because of its usefulness in engineering problems of control system analysis. The describing function technique can be successfully applied to systems other than control systems whenever the sustained oscillations, which are based on some nonlinear phenomena, represent possible operating conditions.
The theoretical basis of the describing function analysis lies in the van der Pol method of slowly varying coefficients as well as in the methods of harmonic balance and equivalent linearization for solving certain problems of nonlinear mechanics. The analysis has been further developed in the work of Goldfarb with the emphasis on nonlinear phenomena in feedback systems.
For presenting the concept of the describing function method, a nonlinear time invariant system with the block diagram of Figure 4 is considered. The block n represents the isolated nonlinearity described by a given function F(x, sx). The linear part of the system is represented by a known transfer function G(s) = C(s)/B(s). The external forcing function f = f(t) is identically zero for all values of time t. Thus the free oscillations in the system are determined by a nonlinear homogeneous differential equation

B(s)x + C(s)F(x, sx) = 0,  s = d/dt
3.3 Krylov-Bogoliubov Asymptotical Method
Since the parameter plane analysis of nonlinear oscillations is based upon the concept and results of the Krylov-Bogoliubov asymptotical method, the fundamental aspects of the method are presented first. Then the derivations involved in further extensions and applications of the method can be more easily followed. Furthermore, the method is highly applicable to practical problems of nonlinear oscillations and represents a basis for other approximate methods in nonlinear analysis, particularly the describing function technique.
The basis of approximate analysis of nonlinear oscillations is the small parameter method, introduced in connection with the three-body problem of celestial mechanics. The fundamental concept and certain solution procedures have been postulated in a general form by Poincare. In this method a second-order nonlinear differential equation describing the oscillations is formulated so that it incorporates a small parameter. The parameter is small in the sense that it represents a number of sufficiently small absolute value.
For a zero value of the parameter the nonlinear equation reduces to a linear equation, the solution of which is a harmonic oscillation. The solution of the linear equation is called the generating solution. The essential idea of the method is to assume the solution of the nonlinear differential equation in the form of an infinite power series.
Then, by substituting the solution into the original differential equation, a recursive system of linear nonhomogeneous differential equations with constant coefficients is obtained. Based upon the generating solution, the recursive system can be solved by elementary calculations up to a desired degree of accuracy. The small parameter method has proved useful for solving numerous problems in physics and the technical sciences.
By considering certain nonlinear phenomena in electron tube oscillators, van der Pol proposed the method of slowly varying coefficients for evaluation of the related periodic oscillations. This method is a variant of the small parameter method which is heavily based upon consideration of the first harmonic in the Fourier series expansion of the nonlinear function, this being the keystone in the describing function analysis.
Furthermore, not only is the method convenient for the identification of periodic solutions of second-order nonlinear differential equations, but it also places in evidence the manner in which the possible periodic solutions are established, after small-amplitude perturbations, around the solution. The method, however, has been based on a rather intuitive approach and only the first approximation has been considered. From the approach it is not clear how the higher approximations can be made.
4. Stability of Nonlinear System
Here, we are going to introduce various methods for the input-output analysis of nonlinear systems. The methods are divided into three categories:
1. Optimal Linear Approximants for Nonlinear Systems. This is a formalization of a technique called the describing function technique, which is popular for a quick analysis of the possibility of oscillation in a feedback loop with some nonlinearities in the loop.
2. Input-Output Stability. This is an extrinsic view of the stability of nonlinear systems, answering the question of when a bounded input produces a bounded output. This is to be compared with the intrinsic, or state space, or Lyapunov approach to stability.
3. Volterra Expansions for Nonlinear Systems. This is an attempt to derive a rigorous "frequency domain" representation of the input-output behavior of certain classes of nonlinear systems.
4.1 Optimal Linear Approximants to Nonlinear Systems
In this section we will be interested in trying to approximate nonlinear systems by linear ones, with the proviso that the "optimal" approximating linear system varies as a function of the input. We start with single-input single-output nonlinear systems. More precisely, we view a nonlinear system, in an input-output sense, as a map N from C([0, ∞[), the space of continuous functions on [0, ∞[, to C([0, ∞[). Thus, given an input u ∈ C([0, ∞[), we will assume that the output of the nonlinear system N is also a continuous function, denoted by y_N(·), defined on [0, ∞[:

y_N = N(u) ∈ C([0, ∞[)
We will now optimally approximate the nonlinear system for a given reference input u0 ∈ C([0, ∞[) by the output of a linear system. The class of linear systems, denoted by W, which we will consider for optimal approximations is represented in convolution form as integral operators. Thus, for an input u0 ∈ C([0, ∞[), the output of the linear system W is given by

y_L(t) = (W u0)(t) := ∫_{-∞}^{∞} w(τ) u0(t - τ) dτ

with the understanding that u0(t) ≡ 0 for t ≤ 0. The convolution kernel w is chosen to minimize the mean squared error defined by

e(w) = lim_{T→∞} (1/T) ∫_0^T [y_N(τ) - y_L(τ)]² dτ
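An illustrative special case of this minimization: approximate the memoryless nonlinearity y_N(t) = u(t)³ by a pure gain y_L(t) = w·u(t) for the reference input u0(t) = sin(t). Minimizing the time-averaged squared error gives the least-squares gain w* = <y_N, u0> / <u0, u0> (the system, input, and horizon here are all assumptions made for the sketch, not taken from the text):

```python
# Least-squares (mean-squared-error optimal) gain approximating u^3 on a
# sinusoidal reference input.
import math

n = 100_000
T = 200 * math.pi                  # an integer number of periods of sin(t)
ts = [T * i / n for i in range(n)]
u0 = [math.sin(t) for t in ts]
yN = [u ** 3 for u in u0]

w_opt = sum(y * u for y, u in zip(yN, u0)) / sum(u * u for u in u0)
print(round(w_opt, 3))  # close to 3/4, the first-harmonic gain of u^3
```

The optimal gain depends on the reference input, which is exactly the point made above: the best linear approximant varies with the input, and for sinusoidal inputs it reproduces the describing-function gain.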
The following assumptions will be needed to solve the optimization problem.
1. Bounded-Input Bounded-Output (b.i.b.o.) Stability. For a given b, there exists a finite m_b such that

|u(t)| < b for all t  ⇒  |y_N(t)| < m_b for all t ∈ [0, ∞[.

Thus, a bounded input to the nonlinear system is assumed to produce a bounded output.
2. Causal, Stable Approximators. The class of approximating linear systems is assumed causal and bounded-input bounded-output stable, i.e.,

w(t) ≡ 0 for t < 0,  and  ∫_0^∞ |w(τ)| dτ < ∞

This condition guarantees that a bounded input u(·) to the linear system W produces a bounded output.
3. Stationarity of the Input. The input u0(·) is stationary, i.e.,

lim_{T→∞} (1/T) ∫_s^{s+T} u0(τ)² dτ

exists uniformly in s. The terminology of a stationary deterministic signal is due to Wiener in his theory of generalized harmonic analysis.
4.1.1 Optimal Linear Approximations for Dynamic Nonlinearities: Oscillations in Feedback Loops
We have studied how harmonic balance can be used to obtain the describing function gain of simple nonlinear systems (memoryless nonlinearities, hysteresis, dead zones, backlash, etc.). The same idea may be extended to dynamic nonlinearities. Consider, for example,

ÿ + 3y²ẏ + y = u

with forcing u = A sin(ωt). If the nonlinear system produces a periodic output (this is a very nontrivial assumption, since several rather simple nonlinear systems behave chaotically under periodic forcing), then one may write the solution y(t) in the form

y(t) = Σ_{k=1}^{∞} A_k sin(kωt + θ_k)
Simplifying and equating first-harmonic terms yields

(1 - ω²) A1 sin(ωt + θ) + (3/4) ω A1³ cos(ωt + θ) = A sin(ωt)

so that

[(1 - ω²) A1]² + [(3/4) ω A1³]² = A²,  θ = -tan⁻¹ [ 3ωA1² / (4(1 - ω²)) ]
Thus, if one were to find the optimal linear, causal, b.i.b.o. stable approximant of the nonlinear system above, it would have a Fourier transform at frequency ω given by what has been referred to as the describing function gain.
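The first-harmonic amplitude condition can be solved numerically for A1. The sketch below assumes the balance [(1 - ω²)A1]² + [(3/4)ωA1³]² = A² and picks an illustrative forcing amplitude and frequency:

```python
# Solving the harmonic-balance amplitude condition for A1 by bisection.

def residual(A1, w, A):
    return ((1 - w ** 2) * A1) ** 2 + (0.75 * w * A1 ** 3) ** 2 - A ** 2

def solve_amplitude(w, A, lo=0.0, hi=10.0, iters=200):
    # residual is monotone increasing in A1 >= 0, so bisection applies
    for _ in range(iters):
        mid = (lo + hi) / 2
        if residual(mid, w, A) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

A1 = solve_amplitude(w=2.0, A=1.0)
print(round(A1, 4))  # slightly below 1/3: the linear term (1 - w^2)*A1 dominates
```

Once A1 is known, the phase follows from the tan⁻¹ expression, and together they give the predicted first-harmonic response.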
4.2 Input-Output Stability
Up to this point, a great deal of the discussion has been based on a state space description of a nonlinear system of the form

ẋ = f(t, x, u),  y = h(t, x, u)

or

x_{k+1} = f(k, x_k, u_k),  y_k = h(k, x_k, u_k).
One can also think of this description from the input-output point of view. Thus, for example, given an initial condition x(0) = x0 and an input u(·) defined on the interval [0, ∞[, say piecewise continuous, and with suitable conditions on f(·) to make the differential equation have a unique solution on [0, ∞[ with no finite escape time, it follows that there is a map N_{x0} from the input u(·) to the output y(·). It is important to remember that the map depends on the initial state x0 of the system.
Of course, if the vector field f and the function h are affine in x and u, then the response of the system can be broken up into a part depending on the initial state and a part depending on the input. More abstractly, one can just define a nonlinear system as a map (possibly dependent on the initial state) from a suitably defined input space to a suitably defined output space. The input and output spaces are suitably defined vector spaces. Thus, the first topic in formalizing and defining the notion of a nonlinear system as a nonlinear operator is the choice of input and output spaces. We will deal with the continuous time and discrete time cases together. To do so, recall that a function g(·): [0, ∞[ → ℝ (respectively, g(·): Z → ℝ) is said to belong to L_p([0, ∞[) (respectively, l_p(Z)) if it is measurable and in addition

∫_0^∞ |g(t)|^p dt < ∞,  resp.  Σ_{n=0}^∞ |g(n)|^p < ∞
Also, the set of all bounded functions is referred to as L_∞[0, ∞[ (respectively, l_∞(ℤ)). The L_p (respectively, l_p) norm of a function g ∈ L_p[0, ∞[ (respectively, l_p(ℤ)) is defined to be

‖g(∙)‖_p := (∫₀^∞ |g(t)|^p dt)^{1/p}  (respectively, ‖g(∙)‖_p := (Σ_{n=0}^∞ |g(n)|^p)^{1/p}),

and the L_∞ norm is defined to be

‖g(∙)‖_∞ := ess sup_t |g(t)|  (respectively, ‖g(∙)‖_∞ := sup_n |g(n)|).
Unlike norms on finite-dimensional spaces, norms on infinite-dimensional spaces are not equivalent, and thus they induce different topologies on the space of functions.
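To make the norm definitions concrete, here is a minimal numerical sketch for the discrete case (using a truncated sequence, since only finitely many terms can be summed on a computer):

```python
import numpy as np

def lp_norm(g, p):
    """l_p norm of a (truncated) sequence g: (sum |g(n)|^p)^(1/p)."""
    return np.sum(np.abs(g) ** p) ** (1.0 / p)

def linf_norm(g):
    """l_inf norm: sup over n of |g(n)|."""
    return np.max(np.abs(g))

# g(n) = (1/2)^n, n = 0, 1, ..., 50 -- a sequence lying in every l_p
g = 0.5 ** np.arange(51)

print(lp_norm(g, 1))   # ~ 2      (geometric series 1/(1 - 1/2))
print(lp_norm(g, 2))   # ~ 1.1547 (sqrt(1/(1 - 1/4)) = sqrt(4/3))
print(linf_norm(g))    # 1.0
```

For this single sequence the three norms already give genuinely different values; on an infinite-dimensional space no pair of them is equivalent.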
4.3 Volterra Input-Output Representations
In this section we will restrict our attention to single-input single-output (SISO) systems. The material in this section may be extended to multiple-input multiple-output systems, with a considerable increase in notational complexity deriving from multilinear algebra in many variables. In an input-output context, linear time-invariant systems of a very general class may be represented by convolution operators of the form

y(t) = ∫_{−∞}^{t} h(t − τ) u(τ) dτ.
Figure 4-1: A graphical interpretation of the Popov criterion
Here the fact that the integral has upper limit t models a causal linear system, and the lower limit of −∞ models the lack of an initial condition in the system description (hence, the entire past history of the system). In contrast to previous sections, where the dependence of the input-output operator on the initial condition was explicit, here we replace this dependence on the initial condition by having the limits of integration go from −∞ rather than 0. In this section, we will explore the properties of a nonlinear generalization of the form
y(t) = Σ_{n=1}^{∞} ∫_{−∞}^{t} ∫_{−∞}^{τ₁} ⋯ ∫_{−∞}^{τ_{n−1}} h_n(t − τ₁, τ₁ − τ₂, ⋯, τ_{n−1} − τ_n) u(τ₁) ⋯ u(τ_n) dτ₁ ⋯ dτ_n.
This is to be thought of as a polynomial or Taylor-series expansion for the function y(∙) in terms of the function u(∙). Historically, Volterra introduced the terminology “function of a function,” or actually “function of lines,” and defined the derivatives of such functions of functions, or “functionals.” Indeed, if F denotes the operator (functional) taking input functions u(∙) to output functions y(∙), then the terms listed above correspond to the summation of the n-th terms of the power series for F. The first use of the Volterra representation in nonlinear system theory was by Wiener, and hence representations of this form are referred to as Volterra-Wiener series. Our development follows that of Boyd, Chua and Desoer [37], and Rugh [248], which have a nice extended treatment of the subject.
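As a discrete-time sketch of such a representation, truncated to the first two kernels and to finite memory (the kernels h1 and h2 below are hypothetical, chosen only for illustration):

```python
import numpy as np

def volterra2(u, h1, h2):
    """Evaluate the first two terms of a discrete, causal Volterra series:
    y[n] = sum_k h1[k] u[n-k]
         + sum_{k1,k2} h2[k1,k2] u[n-k1] u[n-k2]   (finite memory).
    """
    N, M = len(u), len(h1)
    y = np.zeros(N)
    for n in range(N):
        for k1 in range(min(M, n + 1)):
            y[n] += h1[k1] * u[n - k1]
            for k2 in range(min(M, n + 1)):
                y[n] += h2[k1, k2] * u[n - k1] * u[n - k2]
    return y

# hypothetical kernels: a first-order FIR part plus a small quadratic part
h1 = np.array([1.0, 0.5])
h2 = 0.1 * np.outer(h1, h1)      # rank-one quadratic kernel, for illustration
u = np.ones(5)                    # unit-step input
y_out = volterra2(u, h1, h2)
print(y_out)                      # ≈ [1.1, 1.725, 1.725, 1.725, 1.725]
```

The steady-state value splits into the linear contribution Σ h1 = 1.5 and the quadratic contribution 0.1(Σ h1)² = 0.225.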
4.4 Lyapunov Stability Theory
The study of the stability of dynamical systems has a very rich history. Many famous mathematicians, physicists, and astronomers worked on axiomatizing the concepts of stability. A problem which attracted a great deal of early interest was the stability of the solar system, generalized under the title "the N-body stability problem." One of the first to state formally what he called the principle of "least total energy" was Torricelli (1608-1647), who said that a system of bodies was at a stable equilibrium point if it was a point of (locally) minimal total energy. In the middle of the eighteenth century, Laplace and Lagrange took the Torricelli principle one step further: they showed that if the system is conservative (that is, it conserves total energy, kinetic plus potential), then a state corresponding to zero kinetic energy and minimum potential energy is a stable equilibrium point. In turn, several others showed that Torricelli's principle also holds when the systems are dissipative, i.e., when total energy decreases along trajectories of the system. However, the abstract definition of stability for a dynamical system not necessarily derived from a conservative or dissipative system, and a characterization of stability, were not given until 1892 by the Russian mathematician and engineer Lyapunov, in response to certain open problems in determining stable configurations of rotating bodies of fluids posed by Poincaré.
At heart, the theorems of Lyapunov are in the spirit of Torricelli's principle. They give a precise characterization of those functions that qualify as "valid energy functions" in the vicinity of equilibrium points, together with the notion that these "energy functions" decrease along the trajectories of the dynamical systems in question. These precise concepts were combined with careful definitions of different notions of stability to give some very powerful theorems.
Consider a general differential equation of the form

ẋ = f(x, t),  x(t₀) = x₀,

where x ∈ ℝⁿ and t ≥ 0. The system is said to be linear if f(x, t) = A(t)x for some A(∙): ℝ → ℝ^{n×n}, and nonlinear otherwise. We will assume that f(x, t) is piecewise continuous with respect to t; that is, there are only finitely many discontinuity points in any compact set.
The notation B_h will be shorthand for B(0, h), the ball of radius h centered at 0. Properties will be said to be true
* Locally, if they are true for all x₀ in some ball B_h;
* Globally, if they are true for all x₀ ∈ ℝⁿ;
* Semi-globally, if they are true for all x₀ ∈ B_h with h arbitrary;
* Uniformly, if they are true for all t₀ ≥ 0.
4.4.1 Basic Theorems
Generally speaking, the basic theorem of Lyapunov states that when v(x, t) is a p.d.f. or an l.p.d.f. and dv(x, t)/dt ≤ 0, then we can conclude stability of the equilibrium point. The time derivative is

dv(x, t)/dt = ∂v(x, t)/∂t + (∂v(x, t)/∂x) f(x, t).

The rate of change of v(x, t) along the trajectories of the vector field is also called the Lie derivative of v(x, t) along f(x, t). In the statement of the following theorem, recall that we have translated the origin to lie at the equilibrium point under consideration.
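The Lie derivative can be approximated numerically with central differences. A minimal sketch, using the illustrative choices v(x, t) = (x₁² + x₂²)/2 and f(x, t) = (x₂, −x₁ − x₂), for which v̇ = −x₂² analytically:

```python
import numpy as np

def lie_derivative(v, f, x, t, eps=1e-6):
    """dv/dt along f: partial dv/dt + grad_x v . f(x, t),
    with the partial derivatives approximated by central differences."""
    x = np.asarray(x, dtype=float)
    dv_dt = (v(x, t + eps) - v(x, t - eps)) / (2 * eps)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (v(x + e, t) - v(x - e, t)) / (2 * eps)
    return dv_dt + grad @ np.asarray(f(x, t), dtype=float)

v = lambda x, t: 0.5 * (x[0] ** 2 + x[1] ** 2)
f = lambda x, t: (x[1], -x[0] - x[1])

# analytically v' = x1 x2 + x2 (-x1 - x2) = -x2^2, so at x = (1, 1): -1
print(lie_derivative(v, f, [1.0, 1.0], 0.0))
```

Since v is quadratic here, the central differences are exact up to rounding, and the sketch returns −1 at x = (1, 1).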
Table 4-1 Basic Theorems

No.  Conditions on v(x,t)      Conditions on −v̇(x,t)    Conclusions
1    l.p.d.f.                  ≥ 0 locally               stable
2    l.p.d.f., decrescent      ≥ 0 locally               uniformly stable
3    l.p.d.f., decrescent      l.p.d.f.                  uniformly asymptotically stable
4    p.d.f., decrescent        p.d.f.                    globally uniformly asymptotically stable
4.4.2 Exponential Stability Theorems
Assume that f(x, t): ℝ × ℝⁿ → ℝⁿ has continuous first partial derivatives in x and is piecewise continuous in t. Then the two statements below are equivalent:
1. x = 0 is a locally exponentially stable equilibrium point of ẋ = f(x, t), i.e., if x ∈ B_h for h small enough, there exist m, α > 0 such that

‖φ(τ, t, x)‖ ≤ m‖x‖ e^{−α(τ − t)}.
2. There exists a function v(x, t) and some constants h, α₁, α₂, α₃, α₄ > 0 such that for all x ∈ B_h, t ≥ 0,

α₁‖x‖² ≤ v(x, t) ≤ α₂‖x‖²,
dv(x, t)/dt ≤ −α₃‖x‖²,
‖∂v(x, t)/∂x‖ ≤ α₄‖x‖.
4.4.3 LaSalle's Invariance Principle
LaSalle's invariance principle has two main applications:
1. It enables one to conclude asymptotic stability even when −v̇(x, t) is not an l.p.d.f.
2. It enables one to prove that trajectories of the differential equation starting in a given region converge to one of many equilibrium points in that region.
However, the principle applies primarily to autonomous or periodic systems, which are discussed in this section.
Let v: ℝⁿ → ℝ be continuously differentiable and suppose that

Ω_c = {x ∈ ℝⁿ : v(x) ≤ c}

is bounded and v̇(x) ≤ 0 for all x ∈ Ω_c. Define S ⊂ Ω_c by

S = {x ∈ Ω_c : v̇(x) = 0},

and let M be the largest invariant set in S. Then whenever x₀ ∈ Ω_c, φ(t, t₀, x₀) approaches M as t → ∞.
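A simulation sketch of the principle, assuming the damped pendulum ẋ₁ = x₂, ẋ₂ = −sin x₁ − x₂ with v(x) = x₂²/2 + (1 − cos x₁), so that v̇ = −x₂², S = {x : x₂ = 0}, and the largest invariant set M in S is the set of equilibria:

```python
import math

def f(x1, x2):
    # damped pendulum: x1' = x2, x2' = -sin(x1) - x2
    return x2, -math.sin(x1) - x2

def rk4_step(x1, x2, dt):
    k1 = f(x1, x2)
    k2 = f(x1 + 0.5 * dt * k1[0], x2 + 0.5 * dt * k1[1])
    k3 = f(x1 + 0.5 * dt * k2[0], x2 + 0.5 * dt * k2[1])
    k4 = f(x1 + dt * k3[0], x2 + dt * k3[1])
    return (x1 + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            x2 + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def v(x1, x2):
    # energy-like Lyapunov function; along trajectories v' = -x2^2 <= 0
    return 0.5 * x2**2 + 1.0 - math.cos(x1)

x1, x2, dt = 3.0, 0.0, 0.01
history = [v(x1, x2)]
for _ in range(10000):                       # integrate to t = 100
    x1, x2 = rk4_step(x1, x2, dt)
    history.append(v(x1, x2))

print(x1, x2)                                # settles at the equilibrium (0, 0)
print(all(b <= a + 1e-8 for a, b in zip(history, history[1:])))
```

Note that −v̇ = x₂² is not an l.p.d.f. (it vanishes on the whole line x₂ = 0), so plain Lyapunov arguments give only stability; LaSalle's principle is what lets us conclude convergence to the equilibrium set.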
4.4.4 Generalizations of LaSalle's Principle
LaSalle's invariance principle is restricted in applications because it holds only for time-invariant and periodic systems. In extending the result to arbitrary time-varying systems, two difficulties arise:
1. {x : v̇(x, t) = 0} may be a time-varying set.
2. The ω-limit set of a trajectory is itself not invariant.
However, if we have the hypothesis that

v̇(x, t) ≤ −ω(x) ≤ 0,

then the set S may be defined to be

S = {x : ω(x) = 0},

and we may state the following generalization of LaSalle's theorem.
Assume that the vector field f(x, t) is locally Lipschitz continuous in x, uniformly in t, in a ball of radius r. Let v(x, t) satisfy, for functions α₁, α₂ of class K,

α₁(‖x‖) ≤ v(x, t) ≤ α₂(‖x‖).

Further, for some non-negative function ω(x), assume that

v̇(x, t) = ∂v/∂t + (∂v/∂x) f(x, t) ≤ −ω(x) ≤ 0.

Then for all ‖x(t₀)‖ ≤ α₂⁻¹(α₁(r)), the trajectories x(∙) are bounded and

lim_{t→∞} ω(x(t)) = 0.
4.4.5 Instability Theorems
Lyapunov's theorems presented in the previous sections give sufficient conditions for establishing the stability of an equilibrium point. In this section we give some sufficient conditions for establishing the instability of an equilibrium point.
The equilibrium point 0 is unstable at time t₀ if there exists a decrescent function v: ℝⁿ × ℝ → ℝ such that
1. v̇(x, t) is an l.p.d.f.
2. v(0, t) = 0, and there exist points x arbitrarily close to 0 such that v(x, t₀) > 0.
5. The Applications
5.1 View of Random Processes
Recall that a system Y(μ, t) = T[X(μ, t)] is called memoryless iff the output Y(μ, t) is a function of the input X(μ, t) only at the same time instant. For example, Y(μ, t) = X(μ, t − τ) with τ ≠ 0 is not memoryless, whereas Y(μ, t) = X²(μ, t) is. Note that Y(μ, t) = X²(μ, t) is also nonlinear. We are here interested in memoryless nonlinear systems whose input and output are both real-valued and can be characterized by Y(μ, t) = g(X(μ, t)), where g(x) is a function of x.
Figure 5-1: A nonlinear memoryless system
For memoryless nonlinear systems,
* if X(μ, t) is a strict-sense stationary process, so is Y(μ, t);
* if X(μ, t) is stationary of order N, so is Y(μ, t);
* if X(μ, t) is a wide-sense stationary process, Y(μ, t) may not be stationary in any sense.
Therefore, the second-moment description of X(μ, t) is not sufficient to obtain the second-moment description of the output of a memoryless nonlinear system.
Examples of nonlinearity:
* Full-Wave Square Law: g(x) = ax² (Figure 5-2).
* Half-Wave Linear Law: g(x) = ax ∙ u(x), with u(x) a unit step function (Figure 5-3).
* Hysteresis Law (Figure 5-4).
* Hard Limiter (Figure 5-5).
* Soft Limiter (Figure 5-6).
Most of the nonlinear analytical methods concentrate on the second-order statistical description of the input and output processes, namely autocorrelations and power spectra. One famous approach is the direct method, which deals with probability density functions and is convenient when X(μ, t) is Gaussian.
Consider the memoryless nonlinear system Y(μ, t) = g(X(μ, t)), where the first-order and second-order densities of the input process X(μ, t), namely f_X(x; t) and f_X(x₁, x₂; t₁, t₂), are given.
Figure 5-7: A memoryless nonlinear system
Now, the following statistics of the output Y(μ, t) can be obtained:

f_Y(y; t) = Σ_{all roots x_i of g(x) = y} f_X(x_i; t) / |g′(x_i)|,

E{Yⁿ(μ, t)} = ∫ gⁿ(x) f_X(x; t) dx,

R_Y(t₁, t₂) = E{g(X(μ, t₁)) g(X(μ, t₂))} = ∫∫ g(x₁) g(x₂) f_X(x₁, x₂; t₁, t₂) dx₁ dx₂.
Consider some examples below.
Full-Wave Square-Law Device: Y(μ, t) = aX²(μ, t) with a > 0. Then

f_Y(y; t) = (1 / (2√(ay))) [f_X(√(y/a); t) + f_X(−√(y/a); t)],  y ≥ 0,

and f_Y(y; t) = 0 for y < 0.
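A Monte Carlo sketch of this transformation, assuming for illustration a standard Gaussian input X ~ N(0, 1) and a = 2; rather than binning a histogram, it compares the empirical CDF of Y = aX² with the exact P{Y ≤ y} = P{|X| ≤ √(y/a)} = erf(√(y/(2a))) implied by the density above:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
a = 2.0
x = rng.standard_normal(200_000)             # X ~ N(0, 1)
y = a * x**2                                 # full-wave square law

# empirical CDF of Y vs. the CDF implied by f_Y(y; t)
for y0 in (0.5, 1.0, 4.0):
    empirical = np.mean(y <= y0)
    exact = math.erf(math.sqrt(y0 / (2 * a)))
    print(y0, empirical, exact)
```

The exact CDF follows by integrating f_Y(y; t) from 0 to y₀, or directly from P{|X| ≤ √(y₀/a)} for a standard Gaussian input.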
Square-Law Detector:
Figure 5-8: Square-law detector
The zonal LPF allows a spectral band to pass undistorted and filters out everything else perfectly. Assume further that X(μ, t) is a narrowband WSS Gaussian noise process n(μ, t) with zero mean and autocorrelation R_X(τ). Thus, X(μ, t) can be expressed in the polar form

X(μ, t) = n(μ, t) = v(μ, t) cos(2π f_c t + θ(μ, t)).
For fixed t, v(μ, t) and θ(μ, t) are independent random variables, where v(μ, t) is Rayleigh distributed with

f_v(v; t) = (v / R_n(0)) exp{−v² / (2R_n(0))},  v ≥ 0,

and f_v(v; t) = 0 for v < 0, and θ(μ, t) is uniform in [0, 2π). Here, σ²_n = R_n(0) = E{n²(μ, t)} and E{v²(μ, t)} = 2R_n(0).
Note that the zonal LPF filters out all information about frequency and phase. When the bandwidth of the zonal LPF is much smaller than f_c, we have

Y(μ, t) = a v²(μ, t) cos²(2π f_c t + θ(μ, t)) = (a/2)[v²(μ, t) + v²(μ, t) cos(4π f_c t + 2θ(μ, t))],

Z(μ, t) = (a/2) v²(μ, t).
Consider further that S_n(f) = A for |f − f_c| < B/2 or |f + f_c| < B/2, and S_n(f) = 0 otherwise, with A, B > 0. The zonal bandwidth is assumed larger than 2B.
Figure 5-9
In this case, R_X(0) = R_n(0) = 2AB, and the following can be obtained:

E{Yⁿ(μ, t)} = aⁿ E{n^{2n}(μ, t)} = aⁿ ((2n)! / (2ⁿ n!)) (R_n(0))ⁿ,

Var{Y(μ, t)} = E{Y²(μ, t)} − E²{Y(μ, t)} = 3a² ∙ 4A²B² − (a ∙ 2AB)² = 8a²A²B²,

E{Z(μ, t)} = (a/2) E{v²(μ, t)} = (a/2) ∙ 2R_n(0) = 2aAB.
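These moments can be spot-checked by Monte Carlo. The sketch below assumes the illustrative values a = 1, A = 1, B = 0.5, so that R_n(0) = 2AB = 1 and the formulas above predict E{Y} = 2aAB = 1 and Var{Y} = 8a²A²B² = 2 (only the marginal Gaussian statistics of n are needed for these two moments):

```python
import numpy as np

rng = np.random.default_rng(1)
a, A, B = 1.0, 1.0, 0.5
Rn0 = 2 * A * B                               # R_n(0) = 2AB = variance of n

n = rng.normal(0.0, np.sqrt(Rn0), 1_000_000)  # Gaussian samples of n(mu, t)
Y = a * n**2                                  # square-law output

mean_formula = 2 * a * A * B                  # = a R_n(0)
var_formula = 8 * a**2 * A**2 * B**2          # = 2 a^2 R_n(0)^2
print(Y.mean(), mean_formula)
print(Y.var(), var_formula)
```

The sample mean and variance land within sampling error of the predicted values 1 and 2.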
5.2 Example of the Application of Lyapunov's Theorem
Consider the following model of an RLC circuit with a linear inductor, nonlinear capacitor, and nonlinear resistor, as shown in Figure 5-10. This is also a model for a mechanical system with a mass coupled to a nonlinear spring and nonlinear damper, as shown in Figure 5-10. Using as state variables x₁, the charge on the capacitor (respectively, the position of the block), and x₂, the current through the inductor (respectively, the velocity of the block), the equations describing the system are

ẋ₁ = x₂,  ẋ₂ = −f(x₂) − g(x₁).
̇
Here f is a continuous function modeling the resistor current voltage characteristic,
and g the capacitor charge

voltage characteristi
c (respectively the friction and
restoring force models in the mechanical analog). We will assume that f, g both model
locally passive elements, i.e., there exists a
0
such that
𝜎
(
𝜎
)
≥
0
𝜎
∈
[
−
𝜎
0
,
𝜎
0
]
,
𝜎
(
𝜎
)
≥
0
𝜎
∈
[
−
𝜎
0
,
𝜎
0
]
,
Figure 5-10: An RLC circuit and its mechanical analogue
The Lyapunov function candidate is the total energy of the system, namely,

v(x) = x₂²/2 + ∫₀^{x₁} g(σ) dσ.
The first term is the energy stored in the inductor (kinetic energy of the body) and the second term the energy stored in the capacitor (potential energy stored in the spring). The function v(x) is an l.p.d.f., provided that g(σ) is not identically zero on any interval (verify that this follows from the passivity of g). Also,

v̇(x) = x₂[−f(x₂) − g(x₁)] + g(x₁)x₂ = −x₂ f(x₂) ≤ 0

where |x₂| is less than σ₀. This establishes the stability, but not the asymptotic stability, of the origin. In point of fact, the origin is actually asymptotically stable, but showing this needs the LaSalle principle of Section 4.4.3.
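A simulation sketch of this energy argument, using the illustrative passive choices f(σ) = σ³ and g(σ) = σ, for which v(x) = x₁²/2 + x₂²/2 and v̇ = −x₂⁴:

```python
def f(s):
    # resistor characteristic, locally passive: s * f(s) = s^4 >= 0
    return s**3

def g(s):
    # capacitor characteristic, locally passive: s * g(s) = s^2 >= 0
    return s

def dyn(x1, x2):
    # x1' = x2, x2' = -f(x2) - g(x1)
    return x2, -f(x2) - g(x1)

def rk4_step(x1, x2, dt):
    k1 = dyn(x1, x2)
    k2 = dyn(x1 + 0.5 * dt * k1[0], x2 + 0.5 * dt * k1[1])
    k3 = dyn(x1 + 0.5 * dt * k2[0], x2 + 0.5 * dt * k2[1])
    k4 = dyn(x1 + dt * k3[0], x2 + dt * k3[1])
    return (x1 + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            x2 + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def v(x1, x2):
    # total energy: x2^2/2 + integral of g from 0 to x1 = x1^2/2
    return 0.5 * x2**2 + 0.5 * x1**2

x1, x2, dt = 1.0, 0.0, 0.01
vals = [v(x1, x2)]
for _ in range(5000):                        # integrate to t = 50
    x1, x2 = rk4_step(x1, x2, dt)
    vals.append(v(x1, x2))

print(vals[0], vals[-1])                     # energy has decayed
print(all(b <= a + 1e-8 for a, b in zip(vals, vals[1:])))
```

The cubic damping is weak near the origin, so the energy decays only algebraically, which matches the remark above: the Lyapunov argument alone gives stability, while asymptotic stability needs LaSalle.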
5.3 Swing Equation
The dynamics of a single synchronous generator coupled to an infinite bus are given by

θ̇ = ω,  ω̇ = −(D/M)ω + (1/M)(P − B sin(θ)).

Here θ is the angle of the rotor of the generator measured relative to a synchronously spinning reference frame, and ω is its time derivative. Also, M is the moment of inertia of the generator and D its damping, both in normalized units; P is the exogenous power input to the generator from the turbine, and B is the susceptance of the line connecting the generator to the rest of the network, modeled as an infinite bus (see Figure 5-11). A choice of Lyapunov function is

v(θ, ω) = (1/2)Mω² − Pθ − B cos θ,

yielding the stability of the equilibrium point. As in the previous example, one cannot conclude asymptotic stability of the equilibrium point from this analysis.
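A simulation sketch of the swing equation, with illustrative parameters M = 1, D = 0.5, P = 0.5, B = 1 (so the stable equilibrium is θ* = sin⁻¹(P/B) = π/6, ω* = 0); the trajectory settles at the equilibrium, consistent with v̇ = −Dω² ≤ 0:

```python
import math

M, D, P, B = 1.0, 0.5, 0.5, 1.0              # illustrative parameters

def f(th, w):
    # swing dynamics: th' = w, w' = -(D/M) w + (1/M)(P - B sin th)
    return w, -(D / M) * w + (P - B * math.sin(th)) / M

def rk4_step(th, w, dt):
    k1 = f(th, w)
    k2 = f(th + 0.5 * dt * k1[0], w + 0.5 * dt * k1[1])
    k3 = f(th + 0.5 * dt * k2[0], w + 0.5 * dt * k2[1])
    k4 = f(th + dt * k3[0], w + dt * k3[1])
    return (th + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            w + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def v(th, w):
    # Lyapunov function from the text; along trajectories v' = -D w^2 <= 0
    return 0.5 * M * w**2 - P * th - B * math.cos(th)

th, w, dt = 0.0, 0.0, 0.01
for _ in range(10000):                        # integrate to t = 100
    th, w = rk4_step(th, w, dt)

print(th, math.asin(P / B))                   # settles near theta* = pi/6
```

With damping present the equilibrium is in fact attracting, which again is the kind of conclusion LaSalle-type arguments (rather than the energy function alone) provide.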
Figure 5-11: A generator coupled to an infinite bus
6. Conclusion
The development of nonlinear methods faces real difficulties for various reasons. There are no universal mathematical methods for the solution of the nonlinear differential equations which are the mathematical models of nonlinear systems. The available methods deal with specific classes of nonlinear equations and have only limited applicability to system analysis. The classification of a given system and the choice of an appropriate method of analysis are not at all an easy task. Furthermore, even in simple nonlinear problems, there are numerous new phenomena qualitatively different from those expected in linear system behavior, and it is impossible to encompass all these phenomena in a single and unique method of analysis.
Although there is no universal approach to the analysis of nonlinear systems, excluding specific techniques, we can still conclude that the nonlinear methods generally fall under one of the three following approaches: phase-space topological techniques, stability analysis methods, and approximate methods of nonlinear analysis. This classification of the nonlinear methods is rather subjective but can be useful in systematizing their review.
Moreover, we introduced the stability of nonlinear systems. There are various methods for analyzing the stability of nonlinear systems. Owing to limited space, we only discussed some of the important ideas, including input-output analysis, Lyapunov stability theory, and LaSalle's principle.
7. References
[1] Black, H.S., Stabilized feedback amplifiers, Bell System Techn. J., 13, 1-18, 1934.
[2] Bogoliubov, N.N., and Mitropolskiy, Yu.A., Asymptotic Methods in the Theory of Non-Linear Oscillations, New York, Gordon and Breach, 1961.
[3] Director, S.W., and Rohrer, R.A., Introduction to Systems Theory, McGraw-Hill, New York, 1972.
[4] Doyle, J.C., Francis, B.A., and Tannenbaum, A.R., Feedback Control Theory, Macmillan Publishing Company, New York, 1992.
[5] Dulac, H., Signals, Systems, and Transforms, 3rd ed., Prentice Hall, New York, 2002.
[6] Gelb, A., and Velde, W.E., Multiple-Input Describing Functions and Nonlinear System Design, McGraw-Hill, New York, 1968.
[7] Guckenheimer, J., and Holmes, P., Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, 7th printing, Springer-Verlag, New York, 2002.
[8] Nayfeh, A.H., and Mook, D.T., Nonlinear Oscillations, New York, John Wiley & Sons, 1999.
[9] Haykin, S., and Van Veen, B., Signals and Systems, 2nd ed., New York, Wiley & Sons, 2002.
[10] Hilborn, R., Chaos and Nonlinear Dynamics: An Introduction for Scientists and Engineers, 2nd ed., Oxford University Press, New York, 2004.
[11] Jordan, D.W., and Smith, P., Nonlinear Ordinary Differential Equations: An Introduction to Dynamical Systems, 3rd ed., New York, Oxford Univ. Press, 1999.
[12] Khalil, H.K., Nonlinear Systems, 3rd ed., Prentice-Hall, Upper Saddle River, 2002.
[13] Rugh, W.J., Nonlinear System Theory: The Volterra/Wiener Approach, Baltimore, Johns Hopkins Univ. Press, 1981.
[14] Samarskiy, A.A., and Gulin, A.V., Numerical Methods, Nauka, Moscow, 1989.
[15] Sandberg, I.W., On the response of nonlinear control systems to periodic input signals, Bell Syst. Tech. J., 43, 1964.
[16] Sastry, S., Nonlinear Systems: Analysis, Stability and Control, Springer-Verlag, New York, 1999.
[17] Shmaliy, Yu.S., Continuous-Time Signals, Springer, Dordrecht, 2006.
[18] Verhulst, F., Nonlinear Differential Equations and Dynamic Systems, Springer-Verlag, Berlin, 1990.
[19] Wiener, N., Response of a Non-Linear Device to Noise, Report No. 129, Radiation Laboratory, M.I.T., Cambridge, MA, Apr. 1942.
[20] Zames, G., Realizability conditions for nonlinear feedback systems, IEEE Trans. Circuit Theory, CT-11, 186-194, 1964.
[21] Sastry, S., Nonlinear Systems: Analysis, Stability, and Control, Springer-Verlag, New York, Berlin, Heidelberg, 1999.