Volume Visualization of Visible Korean Human (VKH) Dataset on CAVE



Hyung-Seon Park (b), Joong-youn Lee (a), Minsu Joh (a), Min-Suk Chung (d), Young-Hwa Cho (b), Insung Ihm (c)

(a) Supercomputing Center, KISTI, Daejeon, Korea
(b) Bioinformatics Center, KISTI, Daejeon, Korea
(c) Dept. of Computer Science, Sogang Univ., Seoul, Korea
(d) Dept. of Anatomy, School of Medicine, Ajou Univ., Suwon, Korea




ABSTRACT

Volume visualization is a research area covering various techniques that help generate meaningful visual information from two- or higher-dimensional volume datasets. It has become increasingly important in fields such as meteorology, medical science, and computational fluid dynamics. Virtual reality, on the other hand, is a research field concerned with techniques that let users experience the contents of a virtual world through visual, auditory, and tactile senses, and many studies on virtual reality are currently under way worldwide.

We developed a visualization system for CAVE that produces stereoscopic images from the huge VKH volume data in real time using an improved volume visualization technique. CAVE is an innovative immersive 3D virtual environment system, developed at the Electronic Visualization Laboratory (EVL) in the early 1990s. Our system uses an image-based rendering technique and 2D-texture mapping hardware for real-time stereoscopic volume visualization, since current 3D-texture mapping is too expensive. The system offers various user-interface functions for visualizing the Visible Korean Human data. In this paper, the visualization technique for real-time stereoscopic image generation and its implementation are described.

Keywords: Volume visualization, Virtual reality, CAVE, Visible Korean Human, Stereoscopic image


1. INTRODUCTION AND MOTIVATION

Computer graphics is a research field concerned with techniques for generating images with computers, and it has grown very rapidly over the last few decades. Photorealistic visualization technologies based on computer graphics are used in many fields. Volume visualization, one of the most representative technologies in computer graphics, is a research field on techniques that help generate meaningful visual information from two- or higher-dimensional volume datasets. Volume data is a sheer bulk of data, usually multivariate, produced by magnetic resonance imaging (MRI), computed tomography (CT), or scientific computing models in medical science, meteorology, computational fluid dynamics, etc.

Virtual reality is a research field on techniques that let users experience a virtual world through visual, auditory, and tactile senses. The virtual world is generated with computer graphics technology, and it is not static but responds to user input. Many studies on virtual reality are currently under way worldwide.

CAVE is a projection-based immersive virtual environment system developed at the Electronic Visualization Laboratory (EVL) of the University of Illinois at Chicago in the early 1990s [1]. The system was first announced at SIGGRAPH 92, and it remains one of the most immersive virtual environments in the world. CAVE is a cube-shaped VR system in which users view stereoscopic images through three-dimensional shutter glasses and can manipulate the virtual world interactively via a 3D wand. This innovative virtual environment system is used in many application fields such as the military, science, aerospace, automobiles, and medical science. Nowadays there are more than 100 CAVE systems in companies, universities, museums, and research institutes around the world. The Supercomputing Center at KISTI installed a CAVE system, the first in Korea, in 2001, and it is used for visualization of various scientific datasets in meteorology, computational fluid dynamics, molecular dynamics, and medical science.

KISTI and Ajou University created CT, MRI, and RGB datasets of an elderly Korean man, named the Visible Korean Human (VKH) [10]. MR, CT, and anatomical images were acquired. The length of the cadaver was 1,718 mm and the interval of the MR and CT images was 1 mm, so 1,718 sets of MR and CT images were acquired. Each cropped image had 505 x 276 resolution, 8-bit gray color, and a 769 KB file size. The interval of the anatomical images was 0.2 mm, so 8,590 anatomical images were acquired. Each anatomical image had 3,040 x 2,008 resolution, 24-bit color, and a 17,890 KB file size. Several years earlier, the National Library of Medicine (NLM) had first reported the Visible Human dataset, generated from male and female human cadavers. However, since the VH dataset is quite different from the bodies of Korean people, it is not suitable for educational purposes and medical research in Korea.

In this paper, we develop a new real-time volume visualization technique for huge volume datasets such as the VKH, and implement stereoscopic visualization on a CAVE and an SGI Onyx3400 system.


2. PREVIOUS WORK

A high-resolution volume dataset like the VKH is too huge to visualize in real time. Many research groups have proposed volume visualization methods using 3D graphics hardware, but most do not achieve both reasonable image quality and satisfactory rendering speed.

Akeley showed the possibility of accelerating volume visualization using 3D-texture hardware [2], but the method considers only ambient light and produces unshaded images.

Cullip and Neumann proposed very fast volume rendering using 3D-texture hardware [3]. The method can generate a 512x512 image in only 0.1 seconds on an SGI Onyx RealityEngine, but the image quality remains poor.

Van Gelder and Kim proposed a method that is not as fast but generates much improved images [4]. The method is much slower than Cullip and Neumann's because it reshades the volume and reloads the texture map for every frame, since the colors in texture memory are view dependent.

Dachille et al. proposed a fairly fast, high-quality volume rendering method that performs shading and directional shading [5], but it is still not fast enough for real-time use. The frame rate must be at least 15 frames/sec for real-time applications, yet the method renders 128x128x113 CT data into a 512x512 image window at only 4.68 frames/sec.

Ihm et al. proposed a multi-pass rendering algorithm based on Phong's illumination model that produces higher-quality images than hardware-based methods [6]. It emulates Phong's illumination model using a combination of 2D- and 3D-texture mapping hardware, since 3D-texture mapping is still expensive; repeated 2D-texture mapping is much cheaper than 3D-texture mapping. Still, the method does not reach a satisfactory speed for real-time applications.

3. VOLUME VISUALIZATION USING TEXTURE MAPPING HARDWARE

In this section, we briefly describe the volume rendering algorithm that our visualization method builds on. Ihm et al. first proposed the multi-pass rendering algorithm. The multi-pass algorithm is not suitable for real-time applications because of a bottleneck in its first step; we modified the first step using an image-based rendering technique.

3.1. Volume Rendering Technique Based on the Multi-pass Algorithm

The multi-pass algorithm is based on Phong's illumination model, which produces higher-quality images, and it utilizes graphics hardware optimally to evaluate Phong's illumination model quickly. The method consists of two steps. The first step generates a 2D normal vector image. In the second step, the ambient, diffuse, and specular components are calculated via 2D-texture mapping, using the 2D image generated by the first step as a texture map. Eqs. (1) is the equation of Phong's illumination model and Eqs. (2) is the matrix equation for Eqs. (1).



The multi-pass algorithm for this shading equation is described below.

1. First Step
(a) Generate a 2D texture (N) by composing the normal vector textures using 3D-texture mapping hardware.

2. Second Step
(a) Generate the ambient-diffuse reflection texture Mad using the color matrix function and the normal vector texture N.
(b) Generate the specular reflection texture Ms using the color matrix function and N.
(c) Generate the specular-reflection coefficient texture from the specular coefficient.
(d) Draw, n times, a rectangle mapped with Ms into the color buffer.
(e) Draw a rectangle mapped with the specular-reflection coefficient texture into the color buffer.
(f) Draw a rectangle mapped with the Mad texture into the color buffer.

The second step is fast enough because it is a combination of the general color matrix function and 2D-texture mapping, a combination most graphics hardware supports. But the first step takes most of the rendering time because the combination of 3D-texture mapping and composition is very expensive, so this algorithm may not be suitable for real-time applications. We propose a new volume visualization method that applies image-based rendering to the prior multi-pass algorithm.


3.2. Image-Based Rendering

Volume rendering techniques using texture hardware are faster than traditional methods such as ray casting and generate quite high-quality images. However, they are still not fast enough for real-time applications on huge volume datasets, and the rendered image is still poorer than that of ray casting. The previous multi-pass method used a 3D-texture mapping and composition technique to generate the 2D normal vector texture in the first step, but this took most of the rendering time because 3D-texture mapping and its composition are very expensive. Here we propose a new method that produces the normal vector texture using an image-based rendering technique.

The second step can evaluate Phong's illumination model whenever normal vector information is available. So, we pre-calculate high-resolution normal textures for all possible viewing directions using ray casting, and the image appropriate for the current view is used as the normal texture. In normal ray casting, the gradient is calculated at the point where a ray meets a voxel, and shading is performed using the gradient as the normal vector. In our algorithm, however, no shading is performed; only the gradient is kept as the normal vector to generate the normal textures.
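The gradient-as-normal idea can be sketched as follows. This is a sketch with illustrative names and grid layout, not the paper's code; a central difference is one common choice for the gradient at a voxel:

```cpp
#include <array>
#include <cmath>
#include <vector>

// Instead of shading at the ray-voxel hit, store the (normalized)
// central-difference gradient so shading can happen later in step two.
struct Volume {
    int nx, ny, nz;
    std::vector<double> v;  // densities, x-major: v[x + nx*(y + ny*z)]
    double at(int x, int y, int z) const { return v[x + nx * (y + ny * z)]; }
};

// Central-difference gradient at an interior voxel (x, y, z).
std::array<double, 3> gradient(const Volume& vol, int x, int y, int z) {
    return {
        (vol.at(x + 1, y, z) - vol.at(x - 1, y, z)) / 2.0,
        (vol.at(x, y + 1, z) - vol.at(x, y - 1, z)) / 2.0,
        (vol.at(x, y, z + 1) - vol.at(x, y, z - 1)) / 2.0,
    };
}

// Normalize the gradient so it can be stored as a normal-texture texel.
std::array<double, 3> asNormal(std::array<double, 3> g) {
    double len = std::sqrt(g[0] * g[0] + g[1] * g[1] + g[2] * g[2]);
    if (len == 0.0) return {0.0, 0.0, 0.0};
    return {g[0] / len, g[1] / len, g[2] / len};
}
```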

The general image-based rendering method has a critical defect: the 2D-texture images must be regenerated whenever a rendering factor changes. There are several such factors, for example the locations and colors of light sources and the materials of the objects in the scene. Our method does not use the final rendered image for image-based rendering; instead, the normal vector image is used for the shading in the second step of the multi-pass algorithm, which overcomes this problem. The method is thus able not only to visualize huge volume datasets in real time but also to produce very high-quality images, because ray casting is used to generate the normal textures.


4. IMPLEMENTATION AND RESULTS

For real-time volume visualization, images for both the left and right eyes are needed, and they must be rendered simultaneously for an interactive stereoscopic visualization system. We implemented the new multi-pass algorithm on an SGI Onyx3400 InfiniteReality3 to visualize the huge VKH datasets on the CAVE system. C/C++ and OpenGL were used for the core rendering routine, and the OpenGL Performer and tracking APIs were used for stereoscopic visualization, the user interface, and tracking. We did not use the stereoscopic and immersive capabilities offered by APIs such as CAVELib, Multipipe SDK, and VR Juggler; those APIs could not be used because the new method is implemented on top of image-based rendering.


4.1. Stereoscopic Visualization

Stereoscopic images are produced by rendering an image at each of the left and right eye positions and displaying the two images simultaneously; they are viewed stereoscopically with special stereo glasses. Human beings perceive depth through binocular disparity, the difference between the images projected onto the back of each eye (and then onto the visual cortex) because the eyes are separated horizontally by the interocular distance.

There are two frequently used methods for generating stereoscopic images: Toe-in and Off-axis [9]. In the Toe-in method, the cameras located at the left and right eye positions are both pointed at a single focal point. This can generate stereoscopic images, but it causes discomfort because of vertical parallax. In the Off-axis method the camera directions are kept parallel. This method is more accurate than the Toe-in method, and the eyes may be more comfortable because no vertical parallax is generated, but it is more expensive because the viewing frustum must be modified for each eye point. Figure 1 shows how the focal points and viewing frustums are generated for each method.


Figure 1. The stereoscopic generation methods
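The Off-axis setup can be sketched numerically. Under the usual formulation (a sketch with illustrative parameter names, not the paper's code), each eye keeps a parallel view direction and shifts its asymmetric near-plane window so both frusta share the same convergence plane:

```cpp
#include <cmath>

struct Frustum { double left, right, bottom, top; };

// fovY in radians; nearDist = near-plane distance; conv = distance to
// the convergence (screen) plane; eyeOffset = +half the interocular
// distance for the right eye, negative for the left eye.
Frustum offAxisFrustum(double fovY, double aspect, double nearDist,
                       double conv, double eyeOffset) {
    double top = nearDist * std::tan(fovY / 2.0);
    double halfW = aspect * top;
    // Shift the window opposite to the eye translation, scaled down to
    // the near plane, so the two frusta converge on the screen plane.
    double shift = eyeOffset * nearDist / conv;
    return {-halfW - shift, halfW - shift, -top, top};
}
```

Toe-in, by contrast, keeps one symmetric frustum per eye and simply rotates each camera toward the focal point, which is what introduces the vertical parallax.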


We implemented both stereoscopic methods and tried to choose the optimal one. Basically, the viewpoint moves along the circumference of the scene, and the normal images are pre-calculated using ray casting; Figure 2 shows how we generate them. We experimentally found the eye angle at which the image displayed stereoscopically well, then simply chose the textures matching that angle as the normal images at rendering time. For example, when the viewing position is at 3 and the optimal eye angle corresponds to one step along the circumference, we choose positions 2 and 4 for the eyes; if it corresponds to two steps, we choose 1 and 5. In the Toe-in method this mechanism is implemented very simply: we create normal images along the circumference of the scene and do not need a separate set per eye, since both eyes can use the same normal images. In the Off-axis method, however, we must create a set for each eye because the viewing frustums differ, so two normal image sets must be kept. When both methods were implemented, the stereoscopic images were not very different, so we chose the Toe-in method for its memory efficiency. If the eye position changes, we adjust the angle, and the normal images are reloaded.
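The Toe-in texture selection just described reduces to an index computation around the circumference (names and the step count are illustrative assumptions):

```cpp
// Normal images are pre-computed at fixed angular steps around the
// circumference; the left and right eyes reuse the images k steps to
// either side of the current viewing position.
struct EyePair { int left, right; };

EyePair chooseEyeTextures(int viewIndex, int k, int numViews) {
    // Wrap around so index 0 and numViews-1 are adjacent on the circle.
    int left  = ((viewIndex - k) % numViews + numViews) % numViews;
    int right = (viewIndex + k) % numViews;
    return {left, right};
}
```

With viewing position 3 and a one-step eye angle this picks positions 2 and 4; with a two-step angle it picks 1 and 5, matching the example above.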



Figure 2. Generation of the stereoscopic image


4.2. Rendering Engine and User Interface

The rendering engine was implemented with C/C++ and OpenGL. The whole normal vector texture for the coronal (x-axis) view of the VKH at 512x512 resolution was 360 MB. Two kinds of normal vector texture, skin and bone, were generated, for a total of 720 MB of texture maps. Since 256 MB of texture memory is available on the Onyx3400 IR3, we loaded part of the textures and continuously swapped in the proper texture when needed; this is not noticeable because swapping time is very short on the machine. After a texture is loaded, shading is performed with the proper two normal images in the second step of the multi-pass algorithm. The rendering engine can cover the 5 screens of the CAVE, implemented with the multipipe capability of OpenGL Performer. Shading factors such as ambient, diffuse, and specular can be adjusted through the user interface: we manipulate them with the wand, the 6-DOF input device of the CAVE. The transparency of the skin is also changeable through the user interface, which we achieved solely with the blending function of the texture hardware. The program can also map 512x512 texture images. Figure 3 shows the rotation of the VKH and Figure 4 shows several functions of the program. The new multi-pass rendering engine uses several 2D-texture mappings and compositions instead of 3D-texture slicing and composition, so it performs much faster than the prior algorithm. Table 1 shows the improvement in rendering performance. Because two images must be displayed in stereo mode, stereo is twice as slow as mono mode. The 3D-texture method in Table 1 is the prior multi-pass algorithm, which uses 3D texturing in the first step, and the 2D-texture method is the new multi-pass algorithm applied in this study. Our method is 24 times faster than the prior one and entirely suitable for real-time applications.




                             Speed/Frame (sec)   Frame Rate (fps)
  3D Texture Method          0.529               1.89
  2D Texture Method (Mono)   0.022               45.62
  2D Texture Method (Stereo) 0.044               22.81

Table 1. Comparison of the rendering performance (Resolution: 512x512)




Figure 3. Rotation of the VKH



Figure 4. Several operations on the VKH


5. CONCLUSION AND FUTURE WORK

Volume visualization deals with various techniques for extracting meaningful visual information from volume datasets. When the datasets are huge, it is very difficult to render them in real time, and many research groups have endeavored to develop fast volume visualization methods that produce high-quality images.

Virtual reality is a computer graphics research field that is growing very rapidly. CAVE is the most representative device in the field, and it can generate a fully immersive virtual environment. We proposed a fast, high-quality volume visualization method for the huge VKH datasets for real-time applications and developed a stereoscopic visualization system. The system's rendering functions can be manipulated interactively with the 3D wand device in the CAVE. Figure 5 shows the visualization system for the VKH datasets on the CAVE. The method does not rely on 3D-texture hardware; instead it uses 2D-texture mapping and an image-based rendering technique for fast rendering. We used normal vector textures as the input images for image-based rendering so that the properties of light sources and materials can be manipulated dynamically.

Although the new multi-pass algorithm delivers fast, high-quality volume visualization, it requires a very large amount of memory. Three full sets of normal vector textures are needed for rotation about an arbitrary axis, and the amount of texture is about 1 gigabyte when the texture size is 512x512, so compression of the normal textures may be necessary. In addition, when the object is zoomed in at a large scale, the pixels of the image may appear enlarged because the algorithm is based on pre-computed images.