Modeling Seismic Wave Propagation Using Graphics Processor Units (GPU)

Zhangang Wang¹, Suping Peng¹, and Tao Liu²
¹ State Key Laboratory of Coal Resources and Mine Safety, China University of Mining and Technology, Beijing, China
Email: millwzg@163.com
² School of Earth and Space Sciences, Peking University, Beijing, China
Email: liuluot@126.com
Abstract—The main drawback of seismic modeling in 2D viscoelastic media on a single PC is that simulations with large grid sizes require a tremendous amount of floating-point calculations. To improve computation speed, a graphics processing unit (GPU) accelerated method is proposed using the staggered-grid finite difference (FD) method. The geophysical model is decomposed into subdomains for the PML absorbing conditions. The vertex and fragment processing stages are fully used to solve the FD schemes in parallel, and the latest updated frames are swapped in Framebuffer Object (FBO) attachments as inputs for the next simulation step. The simulation program running on modern PCs provides significant speedup over a CPU implementation, which makes it possible to simulate complex seismic propagation in real time at a high resolution of 2048×2048 grid sizes on low-cost PCs.
Index Terms—seismic, wave propagation, finite difference,
viscoelastic media, model, GPU
I. INTRODUCTION
The main drawback of the FD method is that simulations with large model spaces or long non-sinusoidal waveforms can require a tremendous amount of floating-point calculations and long run times. In recent years, spatial parallelism on high-performance PC clusters has made the FD method promising for seismic numerical computing in realistic (complex) media [1, 2].
Modern GPUs have now become ubiquitous in desktop computers and offer an excellent cost-to-performance ratio, exceeding the performance of general-purpose CPUs many times over, thanks to their powerful and fully programmable parallel processing architectures. In the past few years, the programmability of GPUs and their increased floating-point precision have allowed GPUs to perform general-purpose computations [3, 4].
In this article, we propose a new implementation under the current GPU architecture, including domain decomposition and processing techniques, to improve real-time FD simulation of wave propagation in viscoelastic media.
II. THEORY
Early research on the FDM for elastic wave modeling in complex media gradually formulated finite difference schemes based on systems of high-order coupled elastic equations [5-8]. In such studies, however, the earth's viscosity has been ignored, and the synthetic seismograms fail to model the attenuation and dispersion of seismic waves. Day and Minster [9] made the first attempt to incorporate anelasticity into a 2D time-domain modeling method by applying a Padé approximant method. An efficient approach [10] was proposed based on the rheological model called the "generalized standard linear solid" (GSLS), which was shown to explain experimental observations of wave propagation through earth materials [11]. A staggered-grid finite difference technique was then proposed based on the GSLS to model the propagation of seismic waves in 2D/3D viscoelastic media [12], since staggered-grid operators are more accurate than standard-grid operators at computing first derivatives for high frequencies close to the Nyquist limit [8].
A. Staggered-grid Finite Difference Formulation
The first-order velocity-stress equations of viscoelastic wave propagation are given by

$$
\begin{cases}
\dfrac{\partial \sigma_{xx}}{\partial t} = \pi\dfrac{\tau_\varepsilon^{p}}{\tau_\sigma}\left(\dfrac{\partial v_x}{\partial x}+\dfrac{\partial v_y}{\partial y}\right) - 2\mu\dfrac{\tau_\varepsilon^{s}}{\tau_\sigma}\dfrac{\partial v_y}{\partial y} + r_{xx} \\[2ex]
\dfrac{\partial \sigma_{yy}}{\partial t} = \pi\dfrac{\tau_\varepsilon^{p}}{\tau_\sigma}\left(\dfrac{\partial v_x}{\partial x}+\dfrac{\partial v_y}{\partial y}\right) - 2\mu\dfrac{\tau_\varepsilon^{s}}{\tau_\sigma}\dfrac{\partial v_x}{\partial x} + r_{yy} \\[2ex]
\dfrac{\partial \sigma_{xy}}{\partial t} = \mu\dfrac{\tau_\varepsilon^{s}}{\tau_\sigma}\left(\dfrac{\partial v_x}{\partial y}+\dfrac{\partial v_y}{\partial x}\right) + r_{xy} \\[2ex]
\dfrac{\partial r_{xx}}{\partial t} = -\dfrac{1}{\tau_\sigma}\left[ r_{xx} + \pi\left(\dfrac{\tau_\varepsilon^{p}}{\tau_\sigma}-1\right)\left(\dfrac{\partial v_x}{\partial x}+\dfrac{\partial v_y}{\partial y}\right) - 2\mu\left(\dfrac{\tau_\varepsilon^{s}}{\tau_\sigma}-1\right)\dfrac{\partial v_y}{\partial y} \right] \\[2ex]
\dfrac{\partial r_{yy}}{\partial t} = -\dfrac{1}{\tau_\sigma}\left[ r_{yy} + \pi\left(\dfrac{\tau_\varepsilon^{p}}{\tau_\sigma}-1\right)\left(\dfrac{\partial v_x}{\partial x}+\dfrac{\partial v_y}{\partial y}\right) - 2\mu\left(\dfrac{\tau_\varepsilon^{s}}{\tau_\sigma}-1\right)\dfrac{\partial v_x}{\partial x} \right] \\[2ex]
\dfrac{\partial r_{xy}}{\partial t} = -\dfrac{1}{\tau_\sigma}\left[ r_{xy} + \mu\left(\dfrac{\tau_\varepsilon^{s}}{\tau_\sigma}-1\right)\left(\dfrac{\partial v_x}{\partial y}+\dfrac{\partial v_y}{\partial x}\right) \right] \\[2ex]
\dfrac{\partial v_x}{\partial t} = \dfrac{1}{\rho}\left[\dfrac{\partial \sigma_{xx}}{\partial x}+\dfrac{\partial \sigma_{xy}}{\partial y}+f_x\right] \\[2ex]
\dfrac{\partial v_y}{\partial t} = \dfrac{1}{\rho}\left[\dfrac{\partial \sigma_{xy}}{\partial x}+\dfrac{\partial \sigma_{yy}}{\partial y}+f_y\right]
\end{cases}
\tag{1}
$$
The moduli π, μ can be expressed as
$$
\pi = \rho v_{p0}^{2}\left[\mathrm{R}\!\left(\sqrt{\frac{1+i w_0 \tau_\sigma}{1+i w_0 \tau_\varepsilon^{p}}}\right)\right]^{2},
\qquad
\mu = \rho v_{s0}^{2}\left[\mathrm{R}\!\left(\sqrt{\frac{1+i w_0 \tau_\sigma}{1+i w_0 \tau_\varepsilon^{s}}}\right)\right]^{2}
\tag{2}
$$
where v_p0, v_s0 denote the phase velocities at the centre frequency of the source (w_0) for P- and S-waves, respectively. The symbol R denotes the real part of a complex variable. The stress and strain relaxation times for both P- and S-waves can be calculated from the quality factor Q and the angular frequency w.
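For a single standard linear solid this Q-to-relaxation-time relation admits a simple closed form. The sketch below is our own derivation, not taken from the paper: choosing τσ = (√(1+1/Q0²) − 1/Q0)/w0 and τε = 1/(w0²·τσ) makes the quality factor of the solid exactly Q0 at the centre frequency w0 (the function names are ours):

```cpp
#include <cmath>

// Relaxation times of a single standard linear solid (SLS), chosen so that
// the quality factor equals Q0 at the source centre frequency w0 (rad/s).
// Assumed closed form:
//   tau_sigma = (sqrt(1 + 1/Q0^2) - 1/Q0) / w0,  tau_eps = 1 / (w0^2 tau_sigma)
struct RelaxationTimes { double tau_sigma, tau_eps; };

RelaxationTimes relaxation_times(double Q0, double w0) {
    double inv_q = 1.0 / Q0;
    double tau_sigma = (std::sqrt(1.0 + inv_q * inv_q) - inv_q) / w0;
    double tau_eps = 1.0 / (w0 * w0 * tau_sigma);   // so w0^2 tau_eps tau_sigma = 1
    return {tau_sigma, tau_eps};
}

// Quality factor of the SLS at angular frequency w:
//   Q(w) = (1 + w^2 tau_eps tau_sigma) / (w (tau_eps - tau_sigma))
double quality_factor(const RelaxationTimes& t, double w) {
    return (1.0 + w * w * t.tau_eps * t.tau_sigma) / (w * (t.tau_eps - t.tau_sigma));
}
```

With this choice w0·τε and w0·τσ are reciprocal, so Q(w0) = 2/(w0(τε − τσ)) reduces exactly to Q0.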
ISBN 9789525726091 (Print)
Proceedings of the Second International Symposium on Networking and Network Security (ISNNS ’10)
Jinggangshan, P. R. China, 2-4 April 2010, pp. 129-132
© 2010 ACADEMY PUBLISHER
APPROCCS10CN006
A second-order centered difference scheme is applied to approximate the time derivatives, and a fourth-order staggered scheme with centered differences is used to approximate the spatial derivatives. From Eqs. (1), for example,
$\sigma_{xy}^{n+1}$, $r_{xy}^{n+1/2}$ and $v_x^{n+1}$ can be approximated as:

$$
\sigma_{xy}^{n+1}(i,j) = \sigma_{xy}^{n}(i,j) + \Delta t\left[\mu(i,j)\frac{\tau_\varepsilon^{s}(i,j)}{\tau_\sigma(i,j)}\left(\frac{\partial v_x^{n+1/2}}{\partial y}+\frac{\partial v_y^{n+1/2}}{\partial x}\right) + \frac{1}{2}\left(r_{xy}^{n+1/2}(i,j)+r_{xy}^{n-1/2}(i,j)\right)\right]
\tag{5}
$$

$$
r_{xy}^{n+1/2}(i,j) = \left(1+\frac{\Delta t}{2\tau_\sigma(i,j)}\right)^{-1}\left[\left(1-\frac{\Delta t}{2\tau_\sigma(i,j)}\right) r_{xy}^{n-1/2}(i,j) - \frac{\Delta t}{\tau_\sigma(i,j)}\,\mu(i,j)\left(\frac{\tau_\varepsilon^{s}(i,j)}{\tau_\sigma(i,j)}-1\right)\left(\frac{\partial v_x^{n+1/2}}{\partial y}+\frac{\partial v_y^{n+1/2}}{\partial x}\right)\right]
\tag{6}
$$

$$
v_x^{n+1}(i,j) = v_x^{n}(i,j) + \frac{\Delta t}{\rho(i,j)}\left(\frac{\partial \sigma_{xx}^{n+1/2}}{\partial x}+\frac{\partial \sigma_{xy}^{n+1/2}}{\partial y}+f_x\right)
\tag{7}
$$
where i, j and n are the indices for the two spatial directions and time, respectively, and Δt denotes the size of a time step.
The discretization of the spatial differential operator is, for example, given by:
$$
\frac{\partial v(i,j)}{\partial x} = \frac{1}{l}\Big( c_1\big(v(i+1,j)-v(i,j)\big) + c_2\big(v(i+2,j)-v(i-1,j)\big) \Big)
\tag{8}
$$
where l denotes the grid spacing and c1, c2 denote the difference coefficients.
B. PML absorbing boundary condition
In order to simulate an unbounded medium, an
absorbing boundary condition (ABC) must be
implemented to truncate the computational domain in
numerical algorithms. Since the highly effective
perfectly matched layer (PML) method [13] for
electromagnetic waves was proposed, the PML has been
widely used for finitedifference and finiteelement
methods [14]. The effectiveness of the PML for elastic
waves in solids and the zero reflections from PML to the
regular elastic medium were proved [15].
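The paper does not spell out its damping profile; a common choice, shown here only as an assumed sketch, grades the PML attenuation quadratically across a layer of thickness δ, with the peak value d0 set from a target theoretical reflection coefficient R:

```cpp
#include <cmath>

// Assumed quadratic PML damping profile (not taken from the paper):
//   d(x) = d0 (x/delta)^2,  d0 = 3 v log(1/R) / (2 delta),
// where x is the depth into the layer, delta the layer thickness,
// v a representative wave speed and R the target reflection coefficient.
double pml_damping(double x, double delta, double v, double R) {
    if (x <= 0.0) return 0.0;                       // interior: no attenuation
    double d0 = 3.0 * v * std::log(1.0 / R) / (2.0 * delta);
    return d0 * (x / delta) * (x / delta);          // grows toward the outer edge
}
```

The profile is zero at the interior interface (avoiding a reflecting jump) and maximal at the outer edge of the layer.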
III. IMPLEMENTATION ON GPUS
A. Overview
To compute the staggered-grid FD and PML equations on graphics hardware, we divide the geophysical model into regions represented as textures holding the input and output parameters. Fig. 1 shows the division of a 2D model into a collection of regions. Two sets of alternating textures bound to an FBO are used to represent the parameters of the computing frames at time t and t+1, respectively. All the other input-only variables are stored similarly in 2D textures. To update the FBO textures at each time step, we render quadrilaterals (regions) mapped with the textures in order, so that a fragment program running at each texel computes the FD equations. After each frame, the FBO attachments are swapped and the latest updated textures are then used as inputs for the next simulation step.
B. Domain decomposition
The computational domain is divided into an interior region and PML regions. The outgoing waves are absorbed by the PML via high attenuation. For a 2D model, the size of the interior region is N1*N2, denoting the interior part of the whole textures. The PML region covers 8 blocks and can be divided into three types based on the computing process: PML1, which processes the absorbing condition in both the x and z directions, with size NBC*NBC; PML2, only in the z direction; and PML3, only in the x direction. The vertex processors transform the computational domains, and the rasterizer determines all the pixels in the output buffer they cover and generates fragments.
In our approach, the velocity and stress fields are staggered in the spatial and time domains. Therefore, the velocity field is updated after the stress field is updated, and the processing priority of the subdomains is: interior region, PML2 (3), PML3 (2), PML1.
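The decomposition above can be sketched on the host as a list of rectangles whose order follows the stated update priority (the Region type and function name are ours; coordinates are grid indices):

```cpp
#include <vector>

// Sketch of the Section III.B decomposition: an interior region surrounded by
// two z-direction strips (PML2), two x-direction strips (PML3) and four
// NBC x NBC corner blocks (PML1), listed in update-priority order.
struct Region { int x0, z0, w, h; };

std::vector<Region> decompose(int width, int height, int nbc) {
    int n1 = width - 2 * nbc, n2 = height - 2 * nbc;        // interior size
    return {
        {nbc, nbc, n1, n2},                                  // interior (first)
        {nbc, 0, n1, nbc}, {nbc, height - nbc, n1, nbc},     // PML2: z strips
        {0, nbc, nbc, n2}, {width - nbc, nbc, nbc, n2},      // PML3: x strips
        {0, 0, nbc, nbc}, {width - nbc, 0, nbc, nbc},        // PML1: corners
        {0, height - nbc, nbc, nbc}, {width - nbc, height - nbc, nbc, nbc},
    };
}
```

A quick sanity check is that the nine regions tile the whole width*height domain without overlap.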
Fig.1. Implementation of the staggered-grid FD method with PML conditions on GPUs. Vertex processing switches the computing regions and renders quadrilaterals (regions) mapped to textures; fragment programs run at each texel to compute the frames at time t and t+1. FBO attachments are swapped and the latest updated textures are then used as inputs for the next simulation step (and the loop continues).
C. Data representation
For the 2D viscoelastic FD schemes, computation on the interior region at each frame produces P_X, P_Z, P_XZ, V_X, V_Z, R_X, R_Z and R_XZ; computation on the PML regions needs two extra components for each of P_X, P_Z, P_XZ, V_X and V_Z on each of the 4 absorbing boundaries. Considering the times t and t+1, there are actually (8+5*2*4)*2 = 96 arrays storing these components over the whole process.
[Fig. 1 schematic: (A) vertex computational regions, with the Vz, Vx and P textures [1-N] of frame t swapped with textures [N] of frame t+1 through the VZ_FBO, VX_FBO and P_FBO attachments (read-only/write-only, N = 1-N); (B) fragment-processing layout, with the interior region of size N1 = Width - 2*NBC by N2 = Height - 2*NBC surrounded by PML_1 corner blocks, PML_2 strips in the z direction and PML_3 strips in the x direction.]
In order to reduce the number of textures and to switch computational domains quickly, 16 textures are used to store the interior stress-velocity parameters, called whole-textures, with size width*height, where width and height denote the grid-point numbers in the x and z directions respectively; this equals the size of the rendering viewport. The input seismic parameters Vp, Vs, Qp, Qs, ρ are also stored in whole-textures. 5*2 extra textures, called x-direction textures, are used to express all velocity and stress parameters in the x absorbing direction; the size of each texture is (2*NBC, height), where NBC defines the number of grid points of the absorbing boundary. In our work, NBC is 10. Likewise, there are 5*2 extra arrays in the z absorbing direction, called z-direction textures, with size (width, 2*NBC). There are actually another 20 such arrays at time t+1.
From the above analysis, the memory requirement MEM_Req (in MB) for 2D viscoelastic modelling is obtained by

$$
\mathrm{MEM_{Req}} = \left(C \cdot N^{2} + 2 \cdot D \cdot NBC \cdot N\right) \cdot F_z \,/\, 1024^{2}
\tag{11}
$$

where the total number C of whole-textures is 21, the total number D of x- and z-direction textures is 40, and the float size F_z equals 4 bytes.
TABLE I. MEMORY REQUIREMENTS FOR 2D VISCOELASTIC MODEL

Grid size (N²)    Memory Requirements
512²              22.56 M
1024²             87.12 M
2048²             342.25 M
4096²             1356.50 M
Table I shows that only 2D viscoelastic models below 2048×2048 grid sizes can be supported directly on current GPUs, since graphics video memory is generally about 256-512 MB.
D. Fragment processing
The results of each step are written to a two-dimensional memory location called the framebuffer, but the characteristics of the GPU prohibit using this buffer for reading and writing at the same time. Fortunately, the OpenGL extension Framebuffer Object (FBO) allows reusing the framebuffer content as a texture for further processing in a subsequent rendering pass, and the "ping-pong" method lets two buffers swap their roles after each write operation [3]. In order to generate and store intermediate results for each simulation step on GPUs, our strategy is to bind each texture that is rendered to onto one attachment of an assigned FBO; two textures bound to the same FBO represent one stress-velocity component at times t and t+1, respectively. After each frame, the FBO attachments are swapped and the latest updated textures are used as inputs for the next simulation step, with N = 1 - N (N = 0, 1).
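The ping-pong scheme can be illustrated with a CPU-side analogue in which two plain buffers play the roles of the two FBO attachments (a sketch only; the per-cell increment is a trivial stand-in for the FD kernel):

```cpp
#include <vector>

// CPU-side analogue of the "ping-pong" method of Section III.D: two buffers
// alternate as read-only input (frame t) and write-only output (frame t+1);
// flipping the index n = 1 - n replaces any copy-back of the results.
int run_pingpong(int steps) {
    std::vector<int> buf[2] = {std::vector<int>(4, 0), std::vector<int>(4, 0)};
    int n = 0;                                   // which buffer holds frame t
    for (int s = 0; s < steps; ++s) {
        const std::vector<int>& src = buf[n];    // texture bound as input
        std::vector<int>& dst = buf[1 - n];      // FBO attachment written to
        for (std::size_t i = 0; i < src.size(); ++i)
            dst[i] = src[i] + 1;                 // stand-in for the FD update
        n = 1 - n;                               // swap attachments after frame
    }
    return buf[n][0];                            // value in the latest frame
}
```

Because reads and writes never touch the same buffer within a frame, the scheme respects the GPU restriction while avoiding any data copies.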
IV. NUMERICAL EXAMPLES
We have experimented with our GPU-accelerated method, implemented using C++/OpenGL and the Cg shader language, on a PC with an Intel Core(TM)2 Duo E7400 2.8 GHz and 2 GB main memory. The graphics chip is an NVidia GeForce 9800GT with 512 MB video memory and 550 MHz core frequency. The OS is Windows XP. All the results related to the GPU are based on 32-bit single-precision computation in the rendering pipeline. For comparison, we have also implemented a software version of the seismic simulation using single-precision floating point and measured its performance on the same machine.
Two samples are used. One is a three-layer model (model 1) and the other is similar to the Kelly model [14] (model 2). The source is located at the top of the medium (Fig. 2) and propagates a Ricker wavelet downwards with a dominant frequency of approximately 50 Hz.
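The Ricker wavelet with dominant frequency f is commonly defined as r(t) = (1 − 2π²f²t²)·exp(−π²f²t²); the paper does not give its source-time function explicitly, so the sketch below uses that standard definition with the stated 50 Hz dominant frequency:

```cpp
#include <cmath>

// Standard Ricker wavelet with dominant frequency f in Hz (assumed form;
// the paper states only a ~50 Hz dominant frequency for its source):
//   r(t) = (1 - 2 pi^2 f^2 t^2) exp(-pi^2 f^2 t^2)
double ricker(double t, double f) {
    const double pi = 3.141592653589793;
    double a = pi * pi * f * f * t * t;
    return (1.0 - 2.0 * a) * std::exp(-a);   // peak value 1 at t = 0
}
```

In practice the wavelet is sampled at the FD time step and injected at the source grid point; it peaks at t = 0 and decays to negligible amplitude within a few dominant periods.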
(a) Model 1: three-layer model
(b) Model 2: modified Kelly model, with layers:
① Vp=1200 Vs=800 Qp=80 Qs=60 ρ=1.4
② Vp=2440 Vs=1330 Qp=100 Qs=90 ρ=2.2
③ Vp=3210 Vs=2389 Qp=219 Qs=190 ρ=2.3
④ Vp=3500 Vs=2173 Qp=200 Qs=170 ρ=2.5
⑤ Vp=3225 Vs=1800 Qp=294 Qs=220 ρ=2.5
⑥ Vp=3822 Vs=2980 Qp=490 Qs=340 ρ=2.6
⑦ Vp=4785 Vs=3800 Qp=450 Qs=400 ρ=3.4
Fig.2. Two test models, both of the same size, 2500 m × 2500 m.
The computation runs 2000 steps on both the GPU and the CPU. The results for the two models are plotted in Fig. 3. The time of the viscoelastic wave simulation is given as a function of the grid size. In order to compare the performance, the GPU-based method does not transfer the velocity-stress fields or the parameters between main memory and graphics memory. The time spent on advecting and rendering the regions is negligible with respect to the simulation. The time of the staggered-grid FD method depends only on the grid size, independent of the complexity of the geological structure, as shown in both the GPU-based (Fig. 3a) and CPU-based (Fig. 3b) implementations.
The performance of the GPU-based implementation increases significantly with the increasing number of grid points. The speedup factor, which is defined as the ratio of the CPU's time to the GPU's time on the same grid size,
increases from about 2 (128²) to 50 (2048²). That is mainly due to the advantages of the GPU's parallel computing and to the staggered-grid FD method being well suited to implementation in the rendering pipeline. Since the simulation runs in only a few minutes, geophysicists can use the new method to visualize each frame of the computing process in real time, which makes the wave propagation easy to analyze.
[Fig. 2a layer parameters (Model 1): Vp = 2600 m/s, Vs = 1800, Qp = 120, Qs = 110, ρ = 2.4 g/cm³; Vp = 3050, Vs = 2400, Qp = 230, Qs = 220, ρ = 2.7; Vp = 4000, Vs = 3300, Qp = 350, Qs = 267, ρ = 3.0.]
Fig.3. Comparative analysis of the GPU- and CPU-based staggered-grid FD method, simulation time (s) versus grid size from 128² to 2048²: (a) GPU-based method using model 1 and model 2; (b) CPU-based method using model 1 and model 2.
On the other hand, the GPU-based method raises the requirements on the graphics card, since it needs programmable shaders; currently only recent NVidia or ATI cards (GeForce 6 and above) can support the method. In addition, the available grid size is limited by the GPU's maximum texture size. At present the maximum texture size is 4096², so the finite-difference grid cannot exceed this maximum, which restricts the simulation accuracy. Graphics video memory is generally about 256-512 MB, which limits the computing resources, so only grid sizes below 2048² can be supported directly.
V. CONCLUSION
We have presented a GPU-accelerated staggered-grid finite difference method to simulate seismic propagation in 2D viscoelastic media with complex earth structure geometry. The physical model is divided into subdomains for the PML absorbing conditions, and the method is implemented entirely in the current fragment and vertex processing stages on GPUs. The simulation program runs on modern PCs with popular low-cost GPUs, yet it is much faster than a CPU-based simulation. Our experimental results have shown that the GPU version is significantly faster (up to 50 times) for large grid sizes in 2D models. The method makes it possible to simulate complex seismic propagation in real time at a high resolution of 2048×2048 grid sizes on low-cost PCs.
ACKNOWLEDGMENT
The work was supported by the China Postdoctoral Foundation (No. 20090450466) and the National Key Basic Research and Development (973) Program of China (Grant Nos. 2010CB732002, 2009CB724601, 2005CB221505).
REFERENCES
[1] T. Bohlen, "Parallel 3-D viscoelastic finite difference seismic modelling," Computers & Geosciences, vol. 28, no. 8, pp. 887–899, October 2002.
[2] W. Sun, J.W. Shu, and W. Zheng, "Parallel Seismic Propagation Simulation in Anisotropic Media by Irregular Grids Finite Difference Method on PC Cluster," in Gervasi et al. (Eds.): ICCSA 2005, LNCS 3483, 2005, pp. 762–771.
[3] J. Owens, D. Luebke, N. Govindaraju, M. Harris, J. Kruger, A. Lefohn, and T. Purcell, "A survey of general-purpose computation on graphics hardware," in Eurographics 2005, State of the Art Reports, pp. 21–51.
[4] M. Harris, "Mapping Computational Concepts to GPUs," in GPU Gems 2, Addison-Wesley/NVIDIA, 2005, Chapter 47, pp. 493–508.
[5] J. Virieux, "P-SV wave propagation in heterogeneous media: velocity-stress finite-difference method," Geophysics, vol. 51, no. 4, 1986, pp. 889–901.
[6] K.R. Kelly, "Numerical study of Love wave propagation," Geophysics, vol. 48, no. 7, 1983, pp. 833–853.
[7] R.W. Graves, "Simulating seismic wave propagation in 3D elastic media using staggered-grid finite differences," Bull. Seism. Soc. Am., vol. 86, 1996, pp. 1091–1106.
[8] G. Kneib and C. Kerner, "Accurate and efficient seismic modeling in random media," Geophysics, vol. 58, no. 4, 1993, pp. 576–588.
[9] S.M. Day and J.B. Minster, "Numerical simulation of attenuated wavefields using a Padé approximant method," Geophysical Journal of the Royal Astronomical Society, vol. 78, 1984, pp. 105–118.
[10] H. Emmerich and M. Korn, "Incorporation of attenuation into time-domain computations of seismic wave fields," Geophysics, vol. 52, no. 9, 1987, pp. 1252–1264.
[11] H.P. Liu, D.L. Anderson, and H. Kanamori, "Velocity dispersion due to anelasticity: implications for seismology and mantle composition," Geophysical Journal of the Royal Astronomical Society, vol. 47, 1976, pp. 41–58.
[12] J.O.A. Robertsson, J.O. Blanch, and W.W. Symes, "Viscoelastic finite-difference modeling," Geophysics, vol. 59, no. 9, 1994, pp. 1444–1456.
[13] J.P. Berenger, "A perfectly matched layer for the absorption of electromagnetic waves," Journal of Computational Physics, vol. 114, 1994, pp. 185–200.
[14] F.D. Hastings, J.B. Schneider, and S.L. Broschat, "Application of the perfectly matched layer (PML) absorbing boundary condition to elastic wave propagation," J. Acoust. Soc. Am., vol. 100, 1996, pp. 3061–3069.
[15] W.C. Chew and Q.H. Liu, "Perfectly matched layers for elastodynamics: a new absorbing boundary condition," Journal of Computational Acoustics, vol. 4, 1996, pp. 72–79.