Guide Objective Assisted Particle Swarm Optimization and its Application to History Matching


Alan P. Reynolds¹*, Asaad Abdollahzadeh², David W. Corne¹, Mike Christie², Brian Davies³ and Glyn Williams³

¹ School of Mathematical and Computer Sciences (MACS), Heriot-Watt University, Edinburgh, Scotland
² Institute of Petroleum Engineering (IPE), Heriot-Watt University, Edinburgh, Scotland
³ BP

* A.Reynolds@hw.ac.uk

Motivation

History matching is the improvement of parameterized oil reservoir models via the minimization of the misfit between real-world observations and those obtained through simulation.

We wish to automate the history matching of oil reservoirs, incorporating reservoir experts' domain knowledge into a metaheuristic. However, this is difficult to do in a generally applicable way.

We note that, given a suitable model parameterization, certain model parameters will affect certain misfit components to a greater degree than others. This suggests that the problem might be roughly decomposed, with subsets of misfit components being used to create guide objectives for subsets of the model parameters.

We show how PSO can be adapted to use both the guide objectives and the overall objective in a single optimization run.

Fig. 1: The PUNQ-S3 case study.

Optimization and separable problems

If we have a separable objective function, e.g. $f(x, y) = x^2 + y^2$, we should optimize $x$ and $y$ separately. Minimizing $f$ directly results in good values for $x$ being missed when coupled with poor choices for $y$, and vice versa.

Note that it may not always be obvious when the objective can be separated, e.g. $f(x, y) = x^4 + 2x^2y^2 + y^4$, which factorizes as $(x^2 + y^2)^2$ and is therefore minimized by minimizing $x^2$ and $y^2$ separately.

Separate optimization is also appropriate for roughly separable problems, e.g. minimizing $f(x, y) = x^4 + 2x^2y^2 + y^4 + \epsilon x^3 y$, where $\epsilon$ is small. A near-optimal solution is quickly found, and this can be improved further by optimizing $f$ directly if desired.

Here we refer to $g(x) = x^2$ and $h(y) = y^2$ as the guide objectives for $x$ and $y$.

Fig. 2: Standard and guided PSO updates, minimizing $x^2 + y^2$. On such a separable problem, the best values for $x$ (minimizing $x^2$) and $y$ (minimizing $y^2$) provide better guidance than the overall best solution.

PSO and guide objectives

Basic PSO [1]:

$$v_{ij} \leftarrow w v_{ij} + \alpha r_1 (p_{ij} - x_{ij}) + \beta r_2 (g_j - x_{ij}),$$
$$v_{ij} \leftarrow \min(v_{ij}, V_{\max,j}),$$
$$v_{ij} \leftarrow \max(v_{ij}, -V_{\max,j}),$$
$$x_{ij} \leftarrow x_{ij} + v_{ij},$$

where $v_{ij}$ is the velocity of particle $i$ in component $j$, $x_{ij}$ is the particle's position, $p_{ij}$ is the best position visited by particle $i$ (the particle best), $g_j$ is the best position visited by the swarm (the global best), $r_1$ and $r_2$ are random numbers drawn uniformly from $[0, 1]$, $w$ is the inertia weight, and $V_{\max,j}$ caps the velocity in component $j$.
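As a concrete illustration, here is a minimal NumPy sketch of this update; the parameter values and names (`pso_step`, `v_max`, the inertia weight 0.7) are illustrative assumptions rather than settings from the poster:

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7, alpha=1.4, beta=1.4, v_max=1.0):
    """One standard PSO update [1]. x, v, p_best: (n_particles, n_dims)
    arrays; g_best: (n_dims,) swarm-best position. Parameter values are
    illustrative, not taken from the poster."""
    n, d = x.shape
    r1 = np.random.rand(n, d)              # fresh uniform randoms each step
    r2 = np.random.rand(n, d)
    v = w * v + alpha * r1 * (p_best - x) + beta * r2 * (g_best - x)
    v = np.clip(v, -v_max, v_max)          # component-wise velocity clamping
    return x + v, v

# Example: minimize f(x, y) = x^2 + y^2 with a swarm of 20 particles.
f = lambda pts: (pts ** 2).sum(axis=1)
x = np.random.uniform(-10.0, 10.0, (20, 2))
v = np.zeros_like(x)
p_best, p_val = x.copy(), f(x)
for _ in range(100):
    x, v = pso_step(x, v, p_best, p_best[p_val.argmin()])
    vals = f(x)
    improved = vals < p_val
    p_best[improved], p_val[improved] = x[improved], vals[improved]
print(p_val.min())                         # close to 0 after a short run
```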

Using guide objectives:

$$v_{ij} \leftarrow w v_{ij} + \alpha r_1 (p_{ij}^{(j)} - x_{ij}) + \beta r_2 (g_j^{(j)} - x_{ij}),$$

where $p_{ij}^{(j)}$ is the particle best according to the guide objective for decision variable $j$, and $g_j^{(j)}$ is the swarm best according to the guide objective for decision variable $j$.

Using guide objectives and the true objective:

$$v_{ij} \leftarrow w v_{ij} + \alpha r_1 (p_{ij} - x_{ij}) + \beta r_2 (g_j - x_{ij}) + \gamma r_1 (p_{ij}^{(j)} - x_{ij}) + \delta r_2 (g_j^{(j)} - x_{ij}).$$

Changing the values of $\alpha$, $\beta$, $\gamma$ and $\delta$ allows the influence of the guide objectives on the search to be controlled. This performs separate optimizations concurrently in a single run of PSO.
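The combined update might be sketched as follows. The experiments below vary a guide-objective influence λ; since the poster does not give the exact mapping from λ to α, β, γ and δ, the linear blend used here (and the linear "1 → 0" decay) is an assumption for illustration:

```python
import numpy as np

def guided_velocity(x, v, p_best, g_best, p_guide, g_guide,
                    lam, w=0.7, a0=1.4, b0=1.4):
    """Combined velocity update using the true objective and the guide
    objectives. p_guide[i, j] / g_guide[j]: particle/swarm bests for
    component j under that variable's own guide objective. The blend
    below is an assumed way of realizing "influence lambda"; the poster
    only states that alpha, beta, gamma, delta control this influence."""
    n, d = x.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    alpha, beta = (1 - lam) * a0, (1 - lam) * b0    # true-objective pull
    gamma, delta = lam * a0, lam * b0               # guide-objective pull
    return (w * v
            + alpha * r1 * (p_best - x) + beta * r2 * (g_best - x)
            + gamma * r1 * (p_guide - x) + delta * r2 * (g_guide - x))

# A "1 -> 0" schedule: guide influence decays linearly over the run.
lam_at = lambda t, t_max: 1.0 - t / t_max
```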

Test function

Fig. 3: The highly multimodal function, g(x), of Kvasnicka et al. [2], plotted for x ∈ [−10, 10].

Minimize: the 20-variable Rosenbrock function. This problem is roughly separable, with $g(x_i)$ acting as the guide objective for $x_i$.
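For reference, the standard form of the generalized Rosenbrock function [4], with $n = 20$ here:

$$f(\mathbf{x}) = \sum_{i=1}^{n-1}\left[100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2\right].$$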

Fig. 4: Results for the test function, with 95% confidence intervals: mean best objective plotted against the influence of the guide objectives, λ ∈ {0.2, 0.5, 0.8, 1, 1 → 0}. Best results are obtained using both guide objectives and the true objective. (Results obtained using only the true objective are of considerably poorer quality and are omitted for clarity.)

History matching: PUNQ-S3

Porosity values for 9 regions in 5 layers give 45 decision variables. The guide objective for a variable is the sum of misfits for a subset of wells associated with the respective region.

Region    Guide wells
A         5
B         5, 12
C         5, 12
D         5, 12
E         4, 5, 12
F         1, 4, 15
G         1, 4, 11, 15
H         1, 11, 15
I         1, 11

Table 1: The 9 regions in the PUNQ-S3 reservoir and the wells most likely to be affected by changes in those regions.
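To make the construction concrete, here is a small sketch of how these guide objectives could be assembled from Table 1; the `well_misfits` interface and function names are hypothetical:

```python
# Guide wells for each of the 9 regions (Table 1).
GUIDE_WELLS = {
    "A": [5], "B": [5, 12], "C": [5, 12], "D": [5, 12], "E": [4, 5, 12],
    "F": [1, 4, 15], "G": [1, 4, 11, 15], "H": [1, 11, 15], "I": [1, 11],
}

def guide_misfit(region, well_misfits):
    """Guide objective for any of the 5 porosity variables in `region`:
    the sum of misfits over the wells associated with that region.
    `well_misfits` maps well number -> misfit between simulated and
    observed production data (hypothetical interface)."""
    return sum(well_misfits[w] for w in GUIDE_WELLS[region])
```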

Fig. 5: Results for the PUNQ-S3 history matching problem for 3000 and 1000 function evaluations, with 95% confidence intervals: misfit plotted against λ ∈ {0, 0.2, 0.5, 0.8, 1, 1 → 0}.

References

1. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proc. IEEE Int. Conf. on Neural Networks, vol. 4, pp. 1942-1948 (1995)
2. Kvasnicka, V., Pelikan, M., Pospichal, J.: Hill climbing with learning (an abstraction of genetic algorithm). Neural Network World 6(5), 773-796 (1995)
3. Mohamed, L., Christie, M., Demyanov, V.: Reservoir model history matching with particle swarms. In: SPE Oil and Gas India Conf. and Exhibition, Mumbai, India (2010)
4. Rosenbrock, H.H.: An automatic method for finding the greatest or least value of a function. The Computer Journal 3(3), 175-184 (1960)