Oracle 11g new features: Automatic memory management and variable automatic storage management (ASM) extent size

Overview
Solution configuration
Best practices and results
    Automatic memory management
        Benefits
        Implementing
            MEMORY_MAX_TARGET
            STATISTICS_LEVEL
        Monitoring
    Variable ASM extent sizes
        Benefit
        Implementing
        Monitoring
Conclusion
We value your feedback
For more information
    HP technical references
    HP solutions and training

Overview

Oracle 11g has many new features and capabilities. This project evaluated a subset of the Oracle 11g new features that influence HP server and storage decisions: automatic memory management and variable automatic storage management (ASM) extent sizes.

This paper:
• Describes each feature
• Addresses the benefits of each feature
• Explains how to implement each feature
• Demonstrates monitoring techniques for each feature

Solution configuration

All testing for this project was performed on a two-node Oracle 11g RAC database using ASM on a RHEL4 Update 4 x86_64 operating system. A 1 TB industry-standard online transaction processing (OLTP) benchmark application was created in the Oracle 11g database.

Best practices and results

Automatic memory management

Prior to 11g, Oracle offered automatic management of the SGA and PGA regions, but each was managed independently. Automatic memory management brings the management of the SGA and PGA regions under a single point of control. With this new feature, the database administrator selects the total amount of memory Oracle is permitted to utilize on a server, and Oracle dynamically manages the distribution of that memory between the SGA and PGA regions.

Note
Memory utilization of the SGA and PGA regions is managed separately on each node in a RAC cluster. It is likely that the current region sizes and the individual SGA component sizes will vary across nodes.

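Because each RAC node manages its own SGA and PGA, it can be useful to compare component sizes across instances. The following query is a minimal sketch using the 11g view GV$MEMORY_DYNAMIC_COMPONENTS:

-- Sketch: compare memory component sizes across RAC instances
SELECT inst_id, component, current_size / 1024 / 1024 AS current_size_mb
  FROM gv$memory_dynamic_components
 ORDER BY inst_id, component;
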
Benefits
• Continuously measures memory requirements
• Reallocates available memory without downtime and without manual intervention

The benefits are particularly advantageous in an environment supporting heterogeneous workloads. For example, in an environment hosting OLTP and DSS workloads during different times of day, PGA region use will vary. With automatic memory management enabled, Oracle dynamically moves memory between the SGA and PGA regions, optimizing the performance of both workloads.

Implementing

By default, the Database Configuration Assistant (DBCA) enables automatic memory management when performing a basic installation. However, if you are upgrading from an earlier release or modifying an existing 11g database configured with manual memory management, you must explicitly set the MEMORY_TARGET parameter. MEMORY_TARGET is a dynamic parameter and can be increased up to the value of MEMORY_MAX_TARGET without incurring downtime.

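For example, with MEMORY_MAX_TARGET already set high enough, raising MEMORY_TARGET online might look like the following sketch (the 8G value is purely illustrative):

-- Sketch only: raise the overall memory target without a restart;
-- the new value must not exceed MEMORY_MAX_TARGET (8G is a placeholder)
ALTER SYSTEM SET memory_target = 8G SCOPE = BOTH;
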
MEMORY_MAX_TARGET

MEMORY_MAX_TARGET is a new, static parameter that represents the maximum amount of memory the database instance can use without being restarted. It will default to the value of MEMORY_TARGET if not explicitly set. HP recommends setting MEMORY_MAX_TARGET to the highest practical value given your server resources and other memory requirements.

Best practice
Set MEMORY_MAX_TARGET to the highest practical value given your server resources and other memory requirements.

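Because MEMORY_MAX_TARGET is static, a new value only takes effect after the instance restarts. A minimal sketch, assuming an spfile is in use (the 24G value is a placeholder):

-- Sketch only: record the new ceiling in the spfile, then restart the instance
ALTER SYSTEM SET memory_max_target = 24G SCOPE = SPFILE;
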
SGA_TARGET and PGA_AGGREGATE_TARGET are the predecessors of MEMORY_TARGET. Prior to the introduction of automatic memory management:
• SGA_TARGET represented the total amount of memory available to the components of the SGA.
• PGA_AGGREGATE_TARGET represented the target amount of memory shared by all processes attached to an instance.

These parameters, when set using automatic memory management, serve as the minimum values for the SGA and PGA. By assigning these parameters a value greater than 0, you ensure that Oracle will never resize the regions below that value. However, unless you have done the analysis of your environment and have a valid reason for setting these parameters, it is best to omit them and allow Oracle to dynamically allocate memory.

Best practice
Do not set SGA_TARGET and PGA_AGGREGATE_TARGET without careful analysis of your environment.

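If you need to confirm whether these floors are currently in place, or to remove them so Oracle manages both regions entirely, a sketch along these lines can be used (setting the parameters to 0 removes the minimums when MEMORY_TARGET is in effect):

-- Sketch: check whether minimum floors are set
SELECT name, value
  FROM v$parameter
 WHERE name IN ('sga_target', 'pga_aggregate_target');

-- Sketch: remove the floors and let Oracle size both regions dynamically
ALTER SYSTEM SET sga_target = 0;
ALTER SYSTEM SET pga_aggregate_target = 0;
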
STATISTICS_LEVEL

STATISTICS_LEVEL is the parameter that influences how much information is gathered about your system to enable Oracle to make decisions on memory sizing. There are three possible values for this parameter: ALL, TYPICAL, and BASIC. Setting STATISTICS_LEVEL to BASIC disables the collection of data required for automatic memory management and will cause an ORA- error upon instance startup if automatic memory management is enabled.

Note
An ORA-00824 error will be generated if STATISTICS_LEVEL=BASIC and MEMORY_TARGET > 0.

The only difference between TYPICAL and ALL is the amount of data gathered. TYPICAL provides better performance and, in most cases, will gather sufficient data for Oracle to accurately assess the memory needs.

Best practice
Set STATISTICS_LEVEL=TYPICAL for best performance.

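A sketch of confirming the current setting and changing it online (STATISTICS_LEVEL is a dynamic parameter):

-- Sketch: verify and set the statistics level
SELECT value FROM v$parameter WHERE name = 'statistics_level';
ALTER SYSTEM SET statistics_level = TYPICAL;
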
Monitoring

Query the global view GV$MEMORY_RESIZE_OPS to monitor the dynamic memory adjustments performed by Oracle when automatic memory management is enabled. This view continuously tracks the last 800 completed resizing operations for each RAC instance. Figure 1 illustrates an example of the resizing operations that occur on a single node during OLTP testing.

Note
In a single instance environment, query view V$MEMORY_RESIZE_OPS.

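A query along the following lines lists the recent resizing operations summarized in Figure 1; the column list is a sketch and can be trimmed to suit your needs:

-- Sketch: list recent automatic memory resizing operations per instance
SELECT inst_id, component, oper_type, initial_size, final_size, status, start_time
  FROM gv$memory_resize_ops
 ORDER BY start_time;
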
Figure 1. GV$MEMORY_RESIZE_OPS list of memory resizing operations

Each row in Figure 1 lists a resizing activity. The frequency of resizing operations shown here is typical during the initial phase of a new workload. After just a few minutes of our testing, the memory regions were properly sized and resizing operations were very infrequent. Every time the PGA increased, the SGA, and in this case specifically the DEFAULT buffer cache, decreased. In other words, memory was taken from the DEFAULT buffer cache component of the SGA and distributed to the PGA, and yet the overall memory utilization never exceeded MEMORY_TARGET.

Note that all resizing operations occurred in increments of 64 MB. 64 MB is the granule size used in our configuration and represents the unit of allocation in which SGA components are adjusted during automatic memory management operations. Granule size is determined by the platform running Oracle and the SGA size.[1]

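The granule size in effect for an instance can be confirmed with a simple query; this sketch uses V$SGAINFO, which reports the granule size alongside the other SGA figures:

-- Sketch: report the granule size used for SGA resizing operations
SELECT name, bytes
  FROM v$sgainfo
 WHERE name = 'Granule Size';
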
Figure 2 illustrates output from the ipcs -m command when automatic memory management is enabled.

Figure 2. 4 KB shared memory segment when automatic memory management is enabled

Note the 4 KB shared memory segment owned by Oracle in Figure 2. With automatic memory management enabled, shared memory segments are stored in files in the /dev/shm file system. The size of the files will either be 0 bytes or "granule sized" bytes. Oracle requires that the /dev/shm file system be large enough to support the maximum possible SGA size.

[1] For further information on granule size, see Oracle® Database Administrator's Guide 11g, Release 1 (11.1).

Note
ORA-00845 will be generated at startup if MEMORY_MAX_TARGET > /dev/shm free space.

Warning!
In a RAC environment on RHEL4 Update 4 x86_64, the /dev/shm file system is not cleaned up after instance shutdown on node 1. If there is insufficient free space remaining in /dev/shm to accommodate the entire SGA, the instance will be unable to restart. Oracle Bug 6820987 addresses this issue. One workaround is to manually remove the files in /dev/shm prior to restarting the instance. Be aware, however, that if there are multiple instances hosted on the same server (such as an ASM instance), there may be active files in /dev/shm. The files to safely remove are easily identified with the instance SID in the name.

Variable ASM extent sizes

An allocation unit is defined as the fundamental unit of allocation in a disk group.[2] It is the physical size in which Oracle stripes data across the LUNs of an ASM disk group. An extent is the logical unit of measurement that Oracle uses to manage space at a database file level. A new feature in Oracle 11g is the ability to dynamically vary the size of an extent.

Benefit

One benefit of variable sized extents is a smaller SGA. Oracle uses an ASM extent pointer array stored within the SGA to track all extents associated with the database. For every extent managed, regardless of its size, the ASM extent pointer array increases by 8 bytes. Therefore, the larger the size of the extents, the fewer there are to manage and the smaller the ASM extent pointer array will be. Table 1 illustrates the difference in SGA growth with and without variable sized extents. For brevity, only three AU sizes are shown.

[2] Oracle Database Storage Administrator's Guide 11g, Release 1, page 1-67.

Table 1. ASM extent pointer array size

File size   AU size   Without variable sized extents   With variable sized extents
100 GB      1 MB      800 KB                           237 KB
100 GB      16 MB     50 KB                            50 KB
100 GB      64 MB     12.5 KB                          12.5 KB
1 TB        1 MB      8192 KB                          418 KB
1 TB        16 MB     512 KB                           201 KB
1 TB        64 MB     128 KB                           128 KB
10 TB       1 MB      80 MB                            1570 KB
10 TB       16 MB     5 MB                             370 KB
10 TB       64 MB     1280 KB                          296 KB
1 PB        1 MB      8192 MB                          128 MB
1 PB        16 MB     512 MB                           8 MB
1 PB        64 MB     128 MB                           2 MB
10 PB       1 MB      81920 MB                         1280 MB
10 PB       16 MB     5120 MB                          80 MB
10 PB       64 MB     1280 MB                          20 MB

Table 1 illustrates how the difference in SGA growth between variable sized extents and static extent sizes does not vary significantly until the database reaches the petabyte range. In the last example listed, a 10 PB data file with a 64 MB AU size realizes a 98% ((1280 MB - 20 MB) / 1280 MB) savings in SGA growth by taking advantage of variable sized extents.

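The fixed-extent column follows directly from the 8 bytes-per-extent figure: without variable sized extents, the pointer array needs roughly (file size / AU size) x 8 bytes. For the 1 TB file with a 1 MB AU that is 1,048,576 extents x 8 bytes = 8192 KB, and for the 10 PB file with a 64 MB AU it is 167,772,160 extents x 8 bytes = 1280 MB, matching Table 1.
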
Best practice
Make use of variable sized extents for databases in the petabyte range.

Implementing

You cannot directly set the size of an extent. However, you can indirectly set the size of an extent by selecting an AU size when creating an ASM disk group. The following SQL statement illustrates the creation of a disk group using external redundancy with a 64 MB AU size.

CREATE DISKGROUP FTS EXTERNAL REDUNDANCY
  DISK 'ORCL:FTS1' SIZE 102398 M,
       'ORCL:FTS2' SIZE 102398 M,
       'ORCL:FTS3' SIZE 102398 M,
       'ORCL:FTS4' SIZE 102398 M
  ATTRIBUTE 'compatible.rdbms' = '11.1',
            'compatible.asm'   = '11.1',
            'au_size'          = '64M';

Note
Set compatible.rdbms and compatible.asm to Oracle Release 11 or higher to enable the variable ASM extent size feature.

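To verify the AU size and compatibility attributes of an existing disk group, a sketch like the following can be run against the ASM instance (V$ASM_ATTRIBUTE is only populated for disk groups with compatible.asm set to 11.1 or higher):

-- Sketch: confirm the allocation unit size of each disk group
SELECT name, allocation_unit_size
  FROM v$asm_diskgroup;

-- Sketch: confirm the attributes used when the disk group was created
SELECT group_number, name, value
  FROM v$asm_attribute
 WHERE name IN ('au_size', 'compatible.rdbms', 'compatible.asm');
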
The extent size is dictated by the size of the AU, and there are only three possible sizes for an extent in any single data file. Table 2 depicts the simple algorithm that determines the extent size.

Table 2. Extent growth algorithm

Number of extents            Extent size
< 20,000                     AU
20,000 to 40,000             AU x 8
> 40,000                     AU x 64

If there are fewer than 20,000 extents in a single data file, the extent size is equal to the AU size. Once the data file grows above 20,000 extents but fewer than 40,000 extents, the extent size jumps to eight times the AU size. After 40,000 extents are created, the extent size increases to and remains at 64 times the AU size.

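As a rough illustration with a 1 MB AU: the first 20,000 extents cover about 20 GB of a data file, the next 20,000 extents at 8 MB each cover roughly another 160 GB, and growth beyond approximately 180 GB in that file is allocated in 64 MB extents.
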
Monitoring

The ASM extent pointer array and the variable ASM extent sizes can be monitored with two simple SQL queries. Using the following SQL, we will select data on the extent mapping for a single data file.

SELECT DISK_KFFXP, PXN_KFFXP, SIZE_KFFXP
  FROM X$KFFXP
 WHERE GROUP_KFFXP = 1 AND NUMBER_KFFXP = 268;

Note
Select GROUP_KFFXP and NUMBER_KFFXP values that are appropriate to your environment.

Figure 3. Extent mapping in X$KFFXP

Note
See MetaLink Note 351117.1 for useful queries against X$KFFXP.

Figure 3 illustrates the output from the first query. The output reveals not only the even striping across ASM disks, but also the AU size and extent size growth as the number of extents increases. The first column represents the ASM disk number. Note how the extents are written in a round robin fashion across four disks (LUNs) in our disk group. The second column tracks the sequential numbering of each extent. The third column lists the AU size and verifies the extent growth. The first 19,999 extents are 1 MB in size. According to the algorithm in Table 2, this is the AU size of this disk group. At extent number 20,000, the extent size increased to 8 MB. At extent number 40,000 the extent size increased to 64 MB.

The current size of the ASM pointer array can be monitored with the following SQL:

SELECT INST_ID, BYTES
  FROM GV$SGASTAT
 WHERE NAME = 'ASM extent pointer array';

Note
In a single instance environment, query view V$SGASTAT and omit INST_ID from the query.

Conclusion

Oracle 11g offers many new features. This paper highlighted two of the new features:
• Automatic memory management
• Variable ASM extent sizes

It is important for you to fully understand any new feature before implementing it into a production environment. The description of each feature, along with the benefits and implementation steps, provides you the information needed to decide whether the new features are appropriate for your environment.

We value your feedback

In order to develop technical materials that address your information needs, we need your feedback. We appreciate your time and value your opinion. The following link will take you to a short survey regarding the quality of this paper:
http://hpwebgen.com/Questions.aspx?id=12046&pass=41514

For more information

HP technical references, HP solutions and training
• HP StorageWorks Customer Focused Testing: http://www.hp.com/go/hpcft
• HP ORACLE Solutions: http://www.hp.com/go/ORACLE
• HP data storage and HP StorageWorks products: http://www.hp.com/go/storage
• HP Blade servers: http://www.hp.com/go/blades
• HP ProLiant servers: http://www.hp.com/go/proliant

© 2008 Hewlett-Packard Development Company, L.P. The information contained
herein is subject to change without notice. The only warranties for HP products
and services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an
additional warranty. HP shall not be liable for technical or editorial errors or
omissions contained herein.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
4AA1-5662ENW, March 2008