# Quantitative Techniques

AI and Robotics, Nov 7, 2013

1. Discuss the various mathematical models.

A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modelling. Mathematical models are used not only in the natural sciences (such as physics, biology, earth science, meteorology) and engineering disciplines (e.g. computer science, artificial intelligence), but also in the social sciences (such as economics, psychology, sociology and political science); physicists, engineers, statisticians, operations research analysts and economists use mathematical models most extensively. A model may help to explain a system, to study the effects of its different components, and to make predictions about behaviour.

Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game-theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models, insofar as logic is taken as a part of mathematics. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with the results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed.

Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel.

Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance travelled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modelling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning.
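The dead-reckoning calculation above is simple enough to sketch in code. A minimal sketch follows; the heading convention (0° = north, 90° = east) and the units are illustrative assumptions, not part of the original text:

```python
import math

def dead_reckon(x, y, heading_deg, speed, hours):
    # Predict position by dead reckoning: distance = speed * time,
    # resolved into east (x) and north (y) components.
    distance = speed * hours
    rad = math.radians(heading_deg)
    return x + distance * math.sin(rad), y + distance * math.cos(rad)

# Travelling due east at 10 km/h for 2 hours from the origin.
x, y = dead_reckon(0.0, 0.0, 90.0, 10.0, 2.0)
```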

Population growth. A simple (though approximate) model of population growth is the Malthusian growth model. A slightly more realistic and widely used population growth model is the logistic function, and its extensions.
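As a rough numerical illustration of the contrast between the two models (the discrete-time forms and the parameter values below are my assumptions for the sketch):

```python
def malthusian(p0, r, t):
    # Malthusian model: unbounded exponential growth, P(t) = P0 * (1 + r)^t.
    return p0 * (1 + r) ** t

def logistic(p0, r, k, t):
    # Discrete logistic update: growth slows as P approaches the capacity k.
    p = p0
    for _ in range(t):
        p += r * p * (1 - p / k)
    return p

m = malthusian(100, 0.1, 50)      # grows without bound
l = logistic(100, 0.1, 1000, 50)  # levels off near the carrying capacity k
```

With the same initial population and growth rate, the Malthusian model passes 11,000 while the logistic model saturates just below its carrying capacity of 1,000.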

Model of a particle in a potential field. In this model we consider a particle as being a point of mass m which describes a trajectory in space, modelled by a function x(t) giving its coordinates in space as a function of time. The potential field is given by a function V : R³ → R and the trajectory is a solution of the differential equation

m d²x(t)/dt² = −∇V(x(t)).

Note this model assumes the particle is a point mass, which is certainly known to be false in many cases in which we use this model; for example, as a model of planetary motion.
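A minimal numerical sketch of this model, assuming a one-dimensional harmonic potential V(x) = kx²/2 and semi-implicit Euler integration (both choices are illustrative assumptions, not from the text):

```python
def simulate(x0, v0, k, m, dt, steps):
    # Integrate m * x'' = -dV/dx with V(x) = 0.5 * k * x**2,
    # i.e. acceleration = -k * x / m, via semi-implicit Euler.
    x, v = x0, v0
    for _ in range(steps):
        v += (-k * x / m) * dt
        x += v * dt
    return x, v

# One full period of oscillation (period = 2*pi for k = m = 1),
# so the particle should return close to its starting state.
x, v = simulate(1.0, 0.0, 1.0, 1.0, 0.001, 6283)
```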

Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of n commodities labelled 1, 2, ..., n, each with a market price p1, p2, ..., pn. The consumer is assumed to have a cardinal utility function U (cardinal in the sense that it assigns numerical values to utilities), depending on the amounts of commodities x1, x2, ..., xn consumed. The model further assumes that the consumer has a budget M which is used to purchase a vector x1, x2, ..., xn in such a way as to maximize U(x1, x2, ..., xn). The problem of rational behavior in this model then becomes an optimization problem, that is:

maximize U(x1, x2, ..., xn)

subject to: p1 x1 + p2 x2 + ... + pn xn ≤ M, with xi ≥ 0 for all i.
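For a concrete (hypothetical) instance of this optimization, take a Cobb-Douglas utility U(x1, x2) = x1^a · x2^(1−a); this family has the well-known closed-form solution of spending the fraction a of the budget on good 1:

```python
def best_bundle(prices, budget, alphas):
    # Cobb-Douglas optimum: spend the share alphas[i] of the budget on good i,
    # so x_i = alphas[i] * M / p_i (assumes the shares sum to 1).
    return [a * budget / p for a, p in zip(alphas, prices)]

# Two goods priced at 2 and 5, budget M = 100, utility shares 0.6 / 0.4.
bundle = best_bundle(prices=[2.0, 5.0], budget=100.0, alphas=[0.6, 0.4])
# The optimal bundle exhausts the budget: 2*30 + 5*8 = 100.
```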

2. Explain the various methods to find an initial solution for the transpiration problem, with an example.

transpiration problems (sweaty armpits, sweaty hands)

The sweat glands of excessively sweaty underarms cannot be neutralised by deodorants indefinitely. After all, some ingredients in deodorants that block the pores are not well tolerated by everyone.

temporary treatment

Sweat secretion can be diminished or stopped for a few months by local injection of Botulinum toxin (brand names: Dysport®, Botox®). Especially for people with very excessive armpit perspiration, with or without odour problems, this treatment can be very effective. The average duration of effectiveness is six to eight months.

These treatments can be given during consultation. Injections for sweaty armpits are usually well tolerated; if not, local anaesthetic can fairly simply be administered without directly influencing dexterity.

As the palms of the hands and soles of the feet are very sensitive, a local anaesthetic is required to treat sweaty hands and feet. As a result, it is impossible to drive a car for several hours after treatment.

permanent, surgical diminishing of armpit transpiration

Through an incision of one centimetre at most, under local anaesthesia, the skin of the armpits can be released from the underlying fat. The sweat glands are scraped from the undersurface of the skin by modified liposculpture. Admission is limited to a few hours. There is some discomfort, but light work can be resumed the next day. The skin will adhere to the fat layer like a graft.

Usually, healing is uneventful. Because of the very short incision there is much less visible scarring than after open surgery on the armpits. Temporary hardening of the operated zone is normal.

There is a small chance that a complication will arise by which the skin does not completely take, resulting in an open wound. This does require appropriate dressings and wound care for considerable time.

3. What is normal distribution? Explain the properties of normal distribution.

In probability theory, the normal (or Gaussian) distribution is a continuous probability distribution that has a bell-shaped probability density function, known as the Gaussian function or informally the bell curve:

f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²)),

where the parameter μ is the mean or expectation (location of the peak) and σ² is the variance; σ is known as the standard deviation. The distribution with μ = 0 and σ² = 1 is called the standard normal distribution. A normal distribution is often used as a first approximation to describe real-valued random variables that cluster around a single mean value.

The normal distribution is considered the most prominent probability distribution in statistics. There are several reasons for this. First, the normal distribution is very tractable analytically; that is, a large number of results involving this distribution can be derived in explicit form. Second, the normal distribution arises as the outcome of the central limit theorem, which states that under mild conditions the sum of a large number of random variables is distributed approximately normally. Finally, the "bell" shape of the normal distribution makes it a convenient choice for modelling a large variety of random variables encountered in practice.

For this reason, the normal distribution is commonly encountered in practice, and is used throughout statistics, natural sciences, and social sciences as a simple model for complex phenomena. For example, the observational error in an experiment is usually assumed to follow a normal distribution, and the propagation of uncertainty is computed using this assumption.

Note that a normally distributed variable has a symmetric distribution about its mean. Quantities that grow exponentially, such as prices, incomes or populations, are often skewed to the right, and hence may be better described by other distributions, such as the log-normal distribution or Pareto distribution. In addition, the probability of seeing a normally distributed value that is far (i.e. more than a few standard deviations) from the mean drops off extremely rapidly. As a result, statistical inference using a normal distribution is not robust to the presence of outliers (data that is unexpectedly far from the mean, due to exceptional circumstances, observational error, etc.). When outliers are expected, data may be better described using a heavy-tailed distribution such as the Student's t-distribution.

From a technical perspective, alternative characterizations are possible, for example:

- The normal distribution is the only absolutely continuous distribution all of whose cumulants beyond the first two (i.e. other than the mean and variance) are zero.
- For a given mean and variance, the corresponding normal distribution is the continuous distribution with the maximum entropy.
- The normal distributions are a sub-class of the elliptical distributions.
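The density formula and the rapid tail decay described above can be checked numerically; this sketch uses only the Python standard library (the error function `math.erf` gives the normal CDF):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Gaussian density: exp(-(x - mu)^2 / (2 sigma^2)) / (sigma * sqrt(2 pi)).
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu=0.0, sigma=1.0):
    # CDF via the error function: Phi(x) = (1 + erf((x - mu)/(sigma*sqrt(2))))/2.
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

peak = normal_pdf(0.0)                        # density at the mean, 1/sqrt(2*pi)
within_1sd = normal_cdf(1) - normal_cdf(-1)   # mass within one standard deviation
within_3sd = normal_cdf(3) - normal_cdf(-3)   # nearly all mass within three
```

The values recover the familiar 68-95-99.7 rule: about 68.3% of the mass lies within one standard deviation of the mean, and over 99.7% within three.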

4. What are the assumptions made in a waiting line method?

The method of lines (MOL, NMOL, NUMOL) (Schiesser, 1991; Hamdi et al., 2007; Schiesser, 2009) is a technique for solving partial differential equations (PDEs) in which all but one dimension is discretized. MOL allows standard, general-purpose methods and software, developed for the numerical integration of ODEs and DAEs, to be used. A large number of integration routines have been developed over the years in many different programming languages, and some have been published as open source resources; see for example Lee and Schiesser (2004).

The method of lines most often refers to the construction or analysis of numerical methods for partial differential equations that proceeds by first discretizing the spatial derivatives only, leaving the time variable continuous. This leads to a system of ordinary differential equations to which a numerical method for initial value ordinary differential equations can be applied. The method of lines in this context dates back to at least the early 1960s (Sarmin and Chudov). Many papers discussing the accuracy and stability of the method of lines for various types of partial differential equations have appeared since (for example Zafarullah, or Verwer and Sanz-Serna). W. E. Schiesser of Lehigh University is one of the major proponents of the method of lines, having published widely in this field.

Application to elliptical equations

MOL requires that the PDE problem is well-posed as an initial value (Cauchy) problem in at least one dimension, because ODE and DAE integrators are initial value problem (IVP) solvers.

Thus it cannot be used directly on purely elliptic equations, such as Laplace's equation. However, MOL has been used to solve Laplace's equation by using the method of false transients (Schiesser, 1991; Schiesser, 1994). In this method, a time derivative of the dependent variable is added to Laplace's equation. Finite differences are then used to approximate the spatial derivatives, and the resulting system of equations is solved by MOL. It is also possible to solve elliptical problems by a semi-analytical method of lines (Subramanian, 2004). In this method the discretization process results in a set of ODEs that are solved by exploiting properties of the associated exponential matrix. For a sample code, visit http://www.maple.eece.wustl.edu.
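The discretize-in-space, integrate-in-time procedure described above can be sketched for the 1-D heat equation u_t = α u_xx (my example, not from the text): central differences in space turn the PDE into an ODE system, stepped here with explicit Euler:

```python
def heat_mol(u0, alpha, dx, dt, steps):
    # Semi-discretization: du_i/dt = alpha * (u[i-1] - 2*u[i] + u[i+1]) / dx^2,
    # with boundary values held fixed at zero; time-stepped by explicit Euler.
    u = list(u0)
    for _ in range(steps):
        du = [0.0] * len(u)
        for i in range(1, len(u) - 1):
            du[i] = alpha * (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
        u = [ui + dt * dui for ui, dui in zip(u, du)]
    return u

# Heat spike in the middle of a rod; it spreads out symmetrically and decays.
u0 = [0.0] * 21
u0[10] = 1.0
u = heat_mol(u0, alpha=1.0, dx=0.05, dt=0.001, steps=200)
```

Note that the explicit time step is only stable when α·dt/dx² ≤ 1/2 (here it is 0.4); in practice a stiff ODE integrator would replace the Euler loop.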

5. What does 'significant' mean?

In statistics, a result is called "statistically significant" if it is unlikely to have occurred by chance. The phrase test of significance was coined by Ronald Fisher. As used in statistics, significant does not mean important or meaningful, as it does in everyday speech. Research analysts who focus solely on significant results may miss important response patterns which individually may fall under the threshold set for tests of significance. Many researchers urge that tests of significance should always be accompanied by effect-size statistics, which approximate the size, and thus the practical importance, of the difference.

The amount of evidence required to accept that an event is unlikely to have arisen by chance is known as the significance level or critical p-value. In Fisherian statistical hypothesis testing, the p-value is the probability of observing data at least as extreme as that observed, given that the null hypothesis is true. If the obtained p-value is small, then it can be said either that the null hypothesis is false or that an unusual event has occurred. P-values do not have any repeat-sampling interpretation.

An alternative (but nevertheless related) statistical hypothesis testing framework is the Neyman-Pearson frequentist school, which requires both a null and an alternative hypothesis to be defined and investigates the repeat sampling properties of the procedure, i.e. the probability that a decision to reject the null hypothesis will be made when it is in fact true and should not have been rejected (this is called a "false positive" or Type I error), and the probability that a decision will be made to accept the null hypothesis when it is in fact false (Type II error). Fisherian p-values are philosophically different from Neyman-Pearson Type I errors. This confusion is unfortunately propagated by many statistics textbooks.
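As a worked (hypothetical) example of a p-value calculation, consider a two-sided one-sample z-test with known standard deviation; the sample numbers here are invented for illustration:

```python
import math

def z_test_p_value(sample_mean, mu0, sigma, n):
    # Two-sided p-value: the probability, under the null hypothesis mean = mu0,
    # of observing a sample mean at least as extreme as the one observed.
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

p = z_test_p_value(sample_mean=10.5, mu0=10.0, sigma=2.0, n=100)  # z = 2.5
significant = p < 0.05  # significant at the 5% level, yet the effect is small
```

The example also illustrates the point above: the result is statistically significant, but the effect size (a shift of 0.5 on a scale with σ = 2) may or may not be practically important.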

6. Explain the structure of the M/M/1 model for an infinite population.

7. Why is sampling important?

Application to probabilistic inference

Such methods are frequently used to estimate posterior densities or expectations in state and/or parameter estimation problems in probabilistic models that are too hard to treat analytically, for example in Bayesian networks.
Application to simulation

Importance sampling is a variance reduction technique that can be used in the Monte Carlo method. The idea behind importance sampling is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. If these "important" values are emphasized by sampling more frequently, then the estimator variance can be reduced. Hence, the basic methodology in importance sampling is to choose a distribution which "encourages" the important values. This use of "biased" distributions will result in a biased estimator if it is applied directly in the simulation. However, the simulation outputs are weighted to correct for the use of the biased distribution, and this ensures that the new importance sampling estimator is unbiased. The weight is given by the likelihood ratio, that is, the Radon-Nikodym derivative of the true underlying distribution with respect to the biased simulation distribution.

The fundamental issue in implementing importance sampling simulation is the choice of the biased distribution which encourages the important regions of the input variables. Choosing or designing a good biased distribution is the "art" of importance sampling. The rewards for a good distribution can be huge run-time savings; the penalty for a bad distribution can be longer run times than for a general Monte Carlo simulation without importance sampling.

Mathematical approach

Consider estimating by simulation the probability p_t of an event {X ≥ t}, where X is a random variable with distribution F and probability density function f(x) = F′(x), where prime denotes derivative. A K-length independent and identically distributed (i.i.d.) sequence X_1, ..., X_K is generated from the distribution F, and the number k_t of random variables that lie above the threshold t is counted. The random variable k_t is characterized by the binomial distribution

P(k_t = k) = C(K, k) p_t^k (1 − p_t)^(K−k),  k = 0, 1, ..., K.

One can show that E[k_t/K] = p_t and var[k_t/K] = p_t(1 − p_t)/K, so in the limit K → ∞ we are able to obtain p_t. Note that the variance is low if p_t ≈ 1. Importance sampling is concerned with the determination and use of an alternate density function f_* (for X), usually referred to as a biasing density, for the simulation experiment. This density allows the event {X ≥ t} to occur more frequently, so the sequence lengths K get smaller for a given estimator variance. Alternatively, for a given K, use of the biasing density results in a variance smaller than that of the conventional Monte Carlo estimate. From the definition of p_t, we can introduce f_* as below:

p_t = E[1(X ≥ t)] = ∫ 1(x ≥ t) (f(x)/f_*(x)) f_*(x) dx = E_*[1(X ≥ t) W(X)],

where W(x) ≡ f(x)/f_*(x) is a likelihood ratio and is referred to as the weighting function, and E_* denotes expectation with respect to the biasing density f_*. The last equality in the above equation motivates the estimator

p̂_t = (1/K) Σ_{i=1}^{K} 1(X_i ≥ t) W(X_i),  X_i ~ f_*.

This is the importance sampling estimator of p_t and is unbiased. That is, the estimation procedure is to generate i.i.d. samples from f_* and, for each sample which exceeds t, the estimate is incremented by the weight W evaluated at the sample value. The results are averaged over K trials. The variance of the importance sampling estimator is easily shown to be

var_* p̂_t = (1/K) ( E_*[1(X ≥ t) W²(X)] − p_t² ).
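The estimator above can be sketched for a normal tail probability p_t = P(X ≥ t) with X ~ N(0, 1); the choice of biasing density (a unit normal shifted so it is centred on the threshold) is my own illustrative assumption, not prescribed by the text:

```python
import math
import random

def tail_prob_naive(t, K, rng):
    # Plain Monte Carlo: fraction of N(0,1) samples at or above t.
    return sum(rng.gauss(0.0, 1.0) >= t for _ in range(K)) / K

def tail_prob_importance(t, K, rng):
    # Sample from the biasing density f_* = N(t, 1), so {x >= t} is common,
    # and weight each hit by W(x) = f(x)/f_*(x) to keep the estimate unbiased.
    total = 0.0
    for _ in range(K):
        x = rng.gauss(t, 1.0)
        if x >= t:
            total += math.exp(-x * x / 2) / math.exp(-((x - t) ** 2) / 2)
    return total / K

rng = random.Random(0)
t = 3.0
true_p = 0.5 * math.erfc(t / math.sqrt(2))       # exact P(X >= t), about 0.00135
naive = tail_prob_naive(t, 20_000, rng)           # noisy: few samples hit the tail
weighted = tail_prob_importance(t, 20_000, rng)   # same budget, far lower variance
```

With the same sample budget, the plain estimate rests on only a few dozen tail hits, while the importance-sampled estimate uses roughly half of all samples, each correctly down-weighted by the likelihood ratio.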