Manipulating Volume Rendered Images Using Intuitive Gestures


Requirements Document

Richard Paris

Richard.a.paris@vanderbilt.edu


I pledge my honor that I have neither given nor received aid on this assignment.

Mission Statement


Once implemented, this system will give the user control over various aspects of volume rendering within a virtual environment through intuitive gestural interfaces. This will improve the user's ability to analyze and plan surgeries, and give patients the ability to see exactly what a surgery will entail. The system will also allow quicker and simpler research to be performed as a result of the interface that will be created.





Background Information


In volume rendering there are seven key aspects that must be addressed. The first is the color transfer function, which determines how any voxel will be colored. The second is the opacity transfer function, which determines the opacity of a given voxel. The next three are scale, rotation, and translation of the object; these are important to consider because humans naturally want to pick up, move, and examine an object. Next is the ability to view 2D slices of the volume, which allows a greater amount of detail to be shown. Finally, segmentation is the process of splitting the volume into discrete objects.





Users and Stakeholders


Researchers

This group will consist of Dr. Landman, Dr. Bodenheimer, and his graduate students, but may be extended to other medical researchers. The motivation of this group is to gain a better understanding of the data being visualized, and they will use the system often. These users can be considered expert users given their extensive knowledge of volume rendering and computing; additionally, they are familiar with virtual environments as they relate to visualization of medical data. Typically this user will use the system under only soft time constraints, measured in days rather than minutes or hours. For this group the system will be used in a professional setting: these users are paid researchers working together in a collaborative environment to better understand the data. The actual use will take place mainly within a lab or office environment, a result of the current limitations of virtual environments.

Dr. Landman and Dr. Bodenheimer are the stakeholders of the project, while the graduate students are the primary users.


Medical Students

The second group will be medical students, who will use the system as a means of learning about surgical procedures and understanding the organ or system of organs being viewed. These users can be classified mainly as beginners, as they have little to no experience with volume rendering or virtual environments in general. For users of this type the task nature can be considered very calm; they will be given guidance and will use the system for a given amount of time regardless of how much interaction is completed. The system will be used in a teaching lab with instruction from an expert user; this should be considered a work environment, less collaborative and more instructional. This group consists of secondary users.


Surgeons

This group will consist of medical professionals, specifically surgeons, who will use the system as a means to plan surgeries and other medical procedures. This user group will have varying frequency of use because the system will not be required in all surgical procedures. This user represents the intermediate user: surgeons typically have some knowledge of volume rendering and most certainly have a great deal of knowledge of the human body, but likely have no experience with virtual environments or motion capture. The surgeon will be able to use this system to plan surgeries, which makes the task time critical depending on the severity of the operation. The surgeon can use this system in a lab or office provided the equipment can be set up in the room. Use can be collaborative, either with fellow and subordinate surgeons or with the patient or the patient's family. It can also be an individual exercise for planning, rather than for demonstrating the plan as it would be in a collaborative setting. This group consists of secondary users.





Inputs


Medical Data

The first input to the system will be the medical data to be visualized; the user should be able to preselect data to load into the program before entering the virtual environment. This data will be the primary interaction point for the user, i.e. the user will work mainly with this data under the scope of this system. At this time the data will be soft coded (determined at compile time) into the system for ease of development and testing.


Gestures

The second input will be a set of gestures that will be developed for this system at a later date. These gestures will be distinct from one another and will encompass nearly all of the interaction with the medical data. This input will be collected using one of three systems: the 5DT data glove, the Cyber Glove, or the Leap Motion. The former two devices are gloves that the user wears; the tension of the gloves can be measured to determine the relative positioning of the finger joints. The latter device tracks up to 10 pointing devices (which includes fingers) within a certain space and gives those locations to the system.


Hand Position and Orientation

The next input is the position and orientation of each hand, which will be used primarily to determine the context of each gesture but will also be used in conjunction with certain gestures to make the interaction more fluid. This will account for the remainder of the interaction with the medical data. Hand position will be tracked using an array of motion capture cameras and two wrist bands (one for each wrist) with motion tracking spheres attached.


Head Position and Orientation

The final input will be the head position and orientation, which will be used to determine the camera position and orientation; this input will not have an effect on the interaction with the medical data. This input will be captured using a precise tracking mechanism that utilizes four motion capture cameras. Tracking markers will be mounted on the head mounted display, and the system will be calibrated to track the device.





Outputs


Rendering

The only output of the system is the rendered data and the tools to be used within the system. These tools may be widgets, virtual objects, and/or simply the user's hands, and will be rendered so that the user can interact with the data. This 3D rendering will be sent to the head mounted display for viewing by the user.





Constraints


Computational

The system will be run using three computers: the first will handle hand position tracking, the next will track the position and orientation of the head, and the final computer will render the environment. Therefore the system requires three computers, each capable of performing the outlined tasks, as well as a head mounted display, a motion capture system, and one of the 5DT glove, Cyber Glove, or Leap Motion.


Physical

The system will be confined to a lab or office environment because it requires a great deal of computational power as well as three computers to run. The lab will need to be open and have a large area to walk around in, as required by the motion capture systems. Finally, the area will need to be closed off from outside effects such as light, which can cause errors in the motion capture system. The virtual environment can only be a certain size, and this size is based on the physical dimensions of the room.





Requirements


Functional Requirements



#01 Color Transfer Function Control

The Rationale for this requirement is that the user needs to be able to control the coloring of the objects for the purposes of visualization, and this is done using the color transfer function.

The Description of this requirement is that the user needs to be able to create a piecewise linear function from one variable to three within the system which represents the color transfer function.

The Validation for this requirement is to see if a semi-experienced user can create a transfer function.
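As a concrete illustration of the piecewise linear mapping from one variable to three that this requirement describes, here is a minimal sketch of a color transfer function built from control points. The function and variable names are illustrative assumptions, not part of the planned system.

```python
import bisect

def make_color_transfer(points):
    """Build a piecewise linear color transfer function from control
    points: a sorted list of (scalar_value, (r, g, b)) pairs."""
    xs = [p[0] for p in points]
    colors = [p[1] for p in points]

    def transfer(x):
        # Clamp values outside the defined range.
        if x <= xs[0]:
            return colors[0]
        if x >= xs[-1]:
            return colors[-1]
        i = bisect.bisect_right(xs, x)  # first control point above x
        t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        # Interpolate each of the three color channels independently.
        return tuple(a + t * (b - a)
                     for a, b in zip(colors[i - 1], colors[i]))

    return transfer

# Map low intensities to blue and high intensities to red.
ctf = make_color_transfer([(0.0, (0.0, 0.0, 1.0)),
                           (1.0, (1.0, 0.0, 0.0))])
```

The same structure applies to the opacity transfer function of requirement #02, with a single output channel instead of three.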

#02 Opacity Transfer Function Control

Th
e
Rationale

for this requirement is
that the user needs to be able to
control the opacity of the objects for the purposes of visualization and this is
done using the opacity transfer function.

The
Description

of this requirement is
that the user needs to b
e able to
create a piecewise linear within the system that represents the opacity transfer
function.

The
Validation

for this requirement is to see if a semi
-
experienced used
can create a transfer function.



#03 Object Scale Control

The Rationale for this requirement is that the user needs to be able to expand and contract the volume rendered object to allow for increased detail or a higher level view of the object.

The Description of this requirement is that the user needs to be able to scale the object interactively while maintaining proper spatial relationships.

The Validation for this requirement is to have a user scale an object to a set of sizes and ensure that all of these sizes are possible.
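One way to read "maintaining proper spatial relationships" is that scaling should happen about the object's own center, so the object grows or shrinks in place rather than drifting toward the world origin. A minimal sketch under that assumption, operating on a point list with hypothetical names:

```python
def scale_about_center(points, factor):
    """Uniformly scale a set of 3D points about their centroid, so the
    object expands or contracts in place instead of drifting toward
    the world origin."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    # Translate to the centroid, scale, and translate back.
    return [(cx + factor * (x - cx),
             cy + factor * (y - cy),
             cz + factor * (z - cz)) for x, y, z in points]

# Doubling a segment centered at the origin keeps its center fixed.
pts = scale_about_center([(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)], 2.0)
```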

#04 Rotation Control

The Rationale for this requirement is that it will be easier to rotate the object than to walk around it.

The Description of this requirement is that the user needs to be able to rotate the object about any given axis, which will be indicated by a certain gesture.

The Validation for this requirement is to test and ensure that various axes of rotation can be created and that the user can rotate the object to a given set of orientations.
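Rotation about an arbitrary gesture-indicated axis can be expressed with Rodrigues' rotation formula. The sketch below is one assumption about how such a rotation might be computed per point; the function name and interface are hypothetical.

```python
import math

def rotate_about_axis(point, axis, angle):
    """Rotate a 3D point about a unit axis through the origin by
    `angle` radians, using Rodrigues' rotation formula:
    v' = v cos(a) + (k x v) sin(a) + k (k . v)(1 - cos(a))."""
    kx, ky, kz = axis
    vx, vy, vz = point
    c, s = math.cos(angle), math.sin(angle)
    dot = kx * vx + ky * vy + kz * vz                 # k . v
    cross = (ky * vz - kz * vy,                       # k x v
             kz * vx - kx * vz,
             kx * vy - ky * vx)
    return (vx * c + cross[0] * s + kx * dot * (1 - c),
            vy * c + cross[1] * s + ky * dot * (1 - c),
            vz * c + cross[2] * s + kz * dot * (1 - c))

# Quarter turn of the x unit vector about the z axis.
p = rotate_about_axis((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```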

#05 Translation Control

The Rationale for this requirement is that it is easier to move the object around in the space than to walk to the object.

The Description of this requirement is that the user needs to be able to move the object to any 3D position within the room.

The Validation for this requirement is to have the user move the object along various directions and have the user move the object to a set of locations.

#06 Object Occlusion Control

The Rationale for this requirement is that certain objects do not always contain the most pertinent information, and it would be useful to be able to look behind the less important objects.

The Description of this requirement is that the user needs to be able to hide any object in the data. The user then needs to be able to unhide any of those objects.

The Validation for this requirement is to have the user hide every object in the system in a given order, then unhide them in a different (not reverse) order.

#07 Cutting Plane Control

The Rationale for this requirement is that viewing slices of data can give a better, more traditional look at the specific object; it can also be easier to interpret a 2D image in certain circumstances.

The Description of this requirement is that the user should be able to create a cutting plane along any of the three Cartesian axes.

The Validation for this requirement is for the user to create a cutting plane in multiple spots along these three axes.
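A cutting plane restricted to the three Cartesian axes amounts to extracting a 2D slice at a fixed index along one axis. A minimal sketch under that assumption, using a nested-list volume[x][y][z] and hypothetical names:

```python
def cut_slice(volume, axis, index):
    """Extract a 2D slice from a volume stored as nested lists
    volume[x][y][z], cutting along one Cartesian axis
    (0 = x, 1 = y, 2 = z) at the given index."""
    if axis == 0:  # plane of constant x
        return [[volume[index][y][z] for z in range(len(volume[0][0]))]
                for y in range(len(volume[0]))]
    if axis == 1:  # plane of constant y
        return [[volume[x][index][z] for z in range(len(volume[0][0]))]
                for x in range(len(volume))]
    if axis == 2:  # plane of constant z
        return [[volume[x][y][index] for y in range(len(volume[0]))]
                for x in range(len(volume))]
    raise ValueError("axis must be 0, 1, or 2")

# A tiny 2x2x2 volume; cutting at z = 0 keeps the first entry per cell.
vol = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
sl = cut_slice(vol, 2, 0)
```

Sweeping the plane "to multiple spots" for validation is then just varying `index` along the chosen axis.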




Non-Functional Requirements

#08 Fluidity of Visualization

The Rationale for this requirement is that if the frame rate is too low, the experience diminishes and the user no longer feels as if the system is interactive.

The Description of this requirement is that the rendering must achieve at least 10 frames per second.

The Validation for this requirement is to track the frames per second and ensure that during testing the value stays above 10 for at least 90% of the interactions.
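The 90%-above-10-FPS criterion can be checked mechanically from logged per-frame render times. A small sketch of such a check, with hypothetical names:

```python
def passes_fps_requirement(frame_times, min_fps=10.0, fraction=0.9):
    """Given per-frame render times in seconds, check that the
    instantaneous frame rate is at or above `min_fps` for at least
    `fraction` of the frames (the 10 FPS / 90% criterion)."""
    if not frame_times:
        return False
    ok = sum(1 for dt in frame_times if dt > 0 and 1.0 / dt >= min_fps)
    return ok / len(frame_times) >= fraction

# Nine fast frames (~60 FPS) and one slow frame (2 FPS) still pass.
times = [1 / 60] * 9 + [0.5]
result = passes_fps_requirement(times)
```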

#09 Robustness

The Rationale for this requirement is that a similar system would be of benefit for visualizing a number of different data sets, such as brain, heart, and lungs, rather than just the abdominal data for this project.

The Description of this requirement is that the system must be transferable to the other data sets.

The Validation for this requirement is to use these other data sets and ensure the requirements still hold true.

#10 Input Accuracy

The Rationale for this requirement is that precise and accurate gestures are necessary to control the functions of the system.

The Description of this requirement is that the system should be able to routinely identify the gesture being performed as it is being performed.

The Validation for this requirement is to display the gesture being performed as soon as it is identified and determine if the gesture identified is the same as intended by the user.


User Requirements

#11 Medical Background Knowledge

The Rationale for this requirement is that users need some medical background in order to understand the system.

The Description of this requirement is that users must have at least some domain knowledge.

This requirement has no Validation; it is up to the user to decide this.


Usability Requirements

#12 Gesture Based Interface

The Rationale for this requirement is that the system is based on using gestures as input, so all actions should be gesture based rather than menu based.

The Description of this requirement is that all actions should be gesture based rather than menu based.

The Validation for this requirement is that all of the functional requirements can be achieved without the use of a menu or touch based system.



#13 Gesture Response Rate

The Rationale for this requirement is that the system should quickly respond to gestures to ensure that the user is able to control the object as the interaction is occurring.

The Description of this requirement is that the gesture must be determined within 100 milliseconds.

The Validation for this requirement is to display the gesture being performed as soon as it is identified and see if a typical user can notice any undue delay between the action and the response.
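The 100 millisecond budget pairs naturally with this validation: time the classifier on each sample and flag any determination that exceeds the budget. A minimal sketch with a hypothetical classifier interface:

```python
import time

def classify_within_budget(classify, sample, budget_ms=100.0):
    """Run a gesture classifier on one input sample and report whether
    it finished within the latency budget (100 ms per requirement #13).
    Returns (label, elapsed_ms, within_budget)."""
    start = time.perf_counter()
    label = classify(sample)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return label, elapsed_ms, elapsed_ms <= budget_ms

# A trivial stand-in classifier that is obviously fast enough.
label, ms, ok = classify_within_budget(lambda s: "pinch", [0.1, 0.2])
```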













Schedule

Determine what the Leap Motion can do - September 18th
Determine what the users need to be able to do - September 19th
Design gestures for each - September 24th
Implement gestures for the Cyber Glove - September 26th
Implement gestures for the 5DT glove - September 28th
Implement gestures for the Leap Motion - September 30th
Develop virtual environment - October 3rd
Design criteria for evaluation - October 5th
Design survey/form to test criteria - October 15th
Perform user evaluation - October 16th - November 5th
Interpret data to decide on the best system - November 12th
Deliver completed project - November 19th