Computer Graphics I

COSC 4306 - 05F
J. Rajnovich

Lecture #17: Tuesday, November 8, 2005
Revised December 8, 2005


Topic: 3D Viewing

Readings: Ch. 7.1 & 7.2


In-Class Exercise


Consider the file ‘barn.3vn’ which contains Hill’s specification of a mesh to represent a barn object such as the one seen in our last lecture and in Figure 6.4 on page 290.


Draw a simple sketch on paper to show the locations in 3-space of the 10 vertices specified in the file.


Describe (informally) where the barn is located and how it is oriented in the world coordinate system.


We now compare your answers to the image displayed by today’s OpenGL project, the “camera demo.”


The Viewing Volume (Frustum) Revisited


At this point in the course we need to expand our view of the 3D worlds we generate from a parallel to a perspective projection.


Hill’s Figure 7.2 on page 360 depicts the situation of a camera (eye) situated at the world origin, looking down the negative z-axis with its up vector parallel to the world’s y-axis.




Observe that the frustum is not a parallelepiped (as generated by a parallel projection such as orthographic projection) but is a pyramid whose apex is at the position of the eye.


The pyramid is truncated by the near plane, which is at distance N (positive) from the eye.



The viewing volume is limited to

a) the space between the near plane and the far plane, which is at distance F (positive) from the eye, and

b) the horizontal and vertical planes (not parallel) specified by the viewing angle θ (in degrees) between the top and bottom planes and by the aspect ratio of the width of the near (resp. far) plane to its height.

(As we learn shortly, this requires further discussion and refinement of clipping algorithms.)


Since the aspect ratio of the near plane is identical to the aspect ratio of the far plane, it does not matter whether the programmer thinks of one rather than the other as defining that ratio. OpenGL programmers conventionally give preference to the near plane.


Setting the View Volume in OpenGL


The typical sequence of statements in OpenGL to set this kind of view volume is identical to that for orthographic projections, with the exception that we substitute a call to gluPerspective() in place of glOrtho().

Here is the sequence to memorize for examination purposes.


glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(viewAngle, aspectRatio, N, F);
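
In a real program this projection setup typically lives in the window-reshape handler, so that the aspect ratio always tracks the window. Here is a minimal GLUT sketch of my own; the 60-degree view angle and the 0.1/100.0 near/far distances are illustrative values only, not prescribed by Hill.

#include <GL/glut.h>

/* Reshape handler: recompute the projection whenever the window changes
   size so the view volume's aspect ratio matches the viewport's. */
void myReshape(int width, int height)
{
    if (height == 0) height = 1;        /* guard against division by zero */
    glViewport(0, 0, width, height);    /* draw into the whole window     */

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0,                    /* view angle, in degrees */
                   (double)width / height,  /* aspect ratio W/H       */
                   0.1,                     /* N, near plane distance */
                   100.0);                  /* F, far plane distance  */

    glMatrixMode(GL_MODELVIEW);         /* leave modelview as the current matrix */
}

Registering it with glutReshapeFunc(myReshape) ensures the frustum is rebuilt on every resize.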


Note: gluPerspective() is actually a “wrapper” to hide the real work done by

glFrustum(left, right, bottom, top, N, F)

whose parameters specify the left, right, bottom, top, near and far clipping boundaries respectively.



This is convenient since many OpenGL programmers find the parameters of glFrustum() less intuitive than those of gluPerspective(). With the former one has to infer the viewing angle by taking into account the distance from the eye to the near plane and the resulting angles of the four lines projected from the eye through the near plane corners to the far plane corners.

If the desired viewing angle is already known, it is frustrating to have to work out what near plane distance has to be established to achieve it.
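
To make the relationship concrete, here is a sketch of my own (not code from Hill) showing how the four gluPerspective() parameters determine the six glFrustum() parameters for a symmetric frustum: the half-height of the near-plane window is N tan(θ/2), and the half-width is that times the aspect ratio.

#include <GL/gl.h>
#include <math.h>   /* M_PI is POSIX; define it yourself if your compiler lacks it */

/* A drop-in equivalent of gluPerspective() built on glFrustum(),
   assuming a frustum symmetric about the viewing axis. */
void myPerspective(double viewAngle, double aspectRatio, double N, double F)
{
    double top   = N * tan(viewAngle * M_PI / 360.0);  /* N * tan(theta/2) */
    double right = top * aspectRatio;
    glFrustum(-right, right, -top, top, N, F);
}

Read in the other direction, this is exactly the inference described above: given symmetric glFrustum() boundaries, the viewing angle is θ = 2 arctan(top / N).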


(For further investigation of glFrustum() see the Blue and Red books, and Hill’s discussion on pages 384-385 where we learn exactly what values get placed into the projection matrix by these functions. This is not a “need to know” for our final examination.)


Positioning and Pointing the Camera


For complete control of 3D viewing we need to be able to position and orient the camera anywhere we like in the world model. The “eye” of the viewer is always at the apex of the resulting viewing pyramid.


(In fact, as we see in this week’s demonstration project, we want to be able to do this dynamically in real time; i.e. to be able to “fly” through the scene.)


To do this requires that we translate the position of the eye from its default location at the world origin and that we use rotation transformations to point the camera in the desired direction of view and to orient it to have whatever “up” vector is required. This is done entirely by the modelview matrix.


The OpenGL code sequence to do this is identical to that used for a parallel projection camera since the difference between parallel and perspective projection lies entirely in the projection matrix.


Here is the OpenGL code sequence to memorize for examination purposes.


glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eye.x, eye.y, eye.z,
          look.x, look.y, look.z,
          up.x, up.y, up.z);


What does gluLookAt() actually do?


In simplest terms it creates the viewing matrix V.


Recall that the world transformation matrix (a.k.a. model matrix) is denoted M by Hill, and that the composite modelview matrix is denoted VM by Hill.


What values are actually placed in the viewing matrix V by gluLookAt()?


To answer this question we first have to place ourselves at the point of view of the eye; i.e. the camera viewer. Hill’s Figure 7.3 on page 361 depicts the situation.





Observe that although the camera is positioned somewhere in the world specified by x, y and z coordinates, its point of view has its own coordinate system with its origin at the position of the eye and three orthogonal axes usually designated u, v and n.


The u vector points off to the right of the viewer while the v vector (known as the up vector) defines “up” for the viewer. The n vector defines the direction of the look.

N.B. The eye always looks down its n-axis in the negative direction.


As the camera moves and/or changes orientation, the u, v and n axes remain fixed from the point of view of the eye but change with respect to the world coordinate system defined by the x, y and z axes.


This means that the job of gluLookAt() is to create a transformation matrix to map world coordinates to camera coordinates. The composite matrix VM then ensures that what appears to the viewer is only the world scene that fits inside the current viewing volume, always a function of where the camera is, where it is pointing and how it is oriented.
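
Before writing the matrix down, it helps to see how gluLookAt() derives the three camera axes from its nine arguments. The sketch below follows Hill’s construction; the vec3 type and the helper functions are my own scaffolding for illustration, not part of OpenGL.

#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }

static vec3 cross(vec3 a, vec3 b)
{
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

static vec3 normalize(vec3 a)
{
    double len = sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    vec3 r = { a.x/len, a.y/len, a.z/len };
    return r;
}

/* Derive the camera axes from the gluLookAt() arguments. */
static void cameraAxes(vec3 eye, vec3 look, vec3 up, vec3 *u, vec3 *v, vec3 *n)
{
    *n = normalize(sub(eye, look));  /* n = eye - look: the eye looks down -n    */
    *u = normalize(cross(up, *n));   /* u = up x n: points to the viewer's right */
    *v = cross(*n, *u);              /* v = n x u: the camera's true "up"        */
}

Note that v is computed rather than taken directly from up: the up argument only suggests the vertical, and v ends up being the component of it perpendicular to n, which guarantees an orthonormal frame.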


Here is the view matrix V generated by gluLookAt(), written in terms of the camera axes u, v and n and the eye position:

        | u.x  u.y  u.z  d.x |
V  =    | v.x  v.y  v.z  d.y |
        | n.x  n.y  n.z  d.z |
        |  0    0    0    1  |

where d.x = -eye·u, d.y = -eye·v and d.z = -eye·n.


Observe:

a) that in this notation eye is a point (the location of the viewer in the world) while eye - (0,0,0) is a vector pointing from the world origin to the viewer, and

b) that the values in column four are scalars, each based on a dot product.


(In the next lecture we will see how to generate this matrix, without calling gluLookAt(), by more primitive OpenGL calls.)


For examination purposes please memorize this matrix.



To assure yourself that this matrix is correct try some matrix multiplications.


For example, the world coordinates of point eye should be mapped to the origin of the viewing coordinate system:

        | eye.x |     | u·eye + d.x |     | 0 |
V  *    | eye.y |  =  | v·eye + d.y |  =  | 0 |
        | eye.z |     | n·eye + d.z |     | 0 |
        |   1   |     |      1      |     | 1 |

since u·eye + d.x = u·eye - eye·u = 0, and likewise for the second and third rows.


Likewise, multiplying against the V matrix, the u vector (a world vector) should map to the canonical unit i vector of the viewing coordinate system, the v vector to its j vector and the n vector to its k vector.


Here is what you should get with u:

        | u.x |     | u·u |     | 1 |
V  *    | u.y |  =  | v·u |  =  | 0 |
        | u.z |     | n·u |     | 0 |
        |  0  |     |  0  |     | 0 |

because u, v and n are mutually orthogonal unit vectors. (The fourth coordinate of u is 0 because u is a vector, not a point, so column four of V contributes nothing.)


Please confirm the other two cases; namely, for v and for n.
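
If you would rather check numerically than symbolically, a short test program (my own sketch, reusing the vec3 helpers and cameraAxes() from the earlier fragment) can multiply the rows of V against eye and u directly:

#include <stdio.h>

static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

int main(void)
{
    /* An arbitrary camera placement for the test. */
    vec3 eye = { 4.0, 2.0, 3.0 }, look = { 0.0, 0.0, 0.0 }, up = { 0.0, 1.0, 0.0 };
    vec3 u, v, n;
    cameraAxes(eye, look, up, &u, &v, &n);

    /* Point (eye, 1): each row gives axis.eye - eye.axis = 0. */
    printf("V * eye = (%g, %g, %g)\n",
           dot(u, eye) - dot(eye, u),
           dot(v, eye) - dot(eye, v),
           dot(n, eye) - dot(eye, n));          /* expect (0, 0, 0) */

    /* Vector (u, 0): orthonormality gives (u.u, v.u, n.u) = (1, 0, 0). */
    printf("V * u   = (%g, %g, %g)\n", dot(u, u), dot(v, u), dot(n, u));
    return 0;
}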


Today’s Camera Demo


Today we experiment with my camera demo program to inspect scenes generated by Hill’s sample mesh file objects.


The value of being able both to move and to reorient the camera becomes obvious.


What also becomes obvious is how limited my keyboard interface is. (I encourage any improvements or alternatives that you might like to contribute.)


Challenge: try to simulate the continuous line of flight as a pilot circles to land at an airport, moving forward, pitching downward, descending, yawing and rolling all at the same time!


Six Degrees of Freedom


With my own background in aviation, I find it useful that Hill chooses an aeronautical analogy to help explain camera behaviour in 3D graphics.



Using the analogy of a pilot (the eye) in an aircraft (or spacecraft or submersible watercraft), it is evident that there are six degrees of freedom in 3-space viewing, three each based on translations and on rotations.


Camera Translation

1. forward-backward

2. up-down

3. left-right


Camera Rotation

4. Pitch up-down

5. Yaw left-right

6. Roll left-right
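
To make the six motions concrete, here is a sketch of how each one acts on a camera frame (eye, u, v, n). This is my own illustration reusing the vec3 helpers from the earlier fragments, not Hill’s Camera class, though his Chapter 7 develops something very similar: translations slide eye along the camera’s own axes, and each rotation turns one pair of axes about the third.

#include <math.h>

typedef struct { vec3 eye, u, v, n; } Camera;

/* Translations (degrees of freedom 1-3): slide the eye along the
   camera's own axes.  delU = left-right, delV = up-down, and delN =
   backward-forward (remember the eye looks down -n).              */
static void slide(Camera *c, double delU, double delV, double delN)
{
    c->eye.x += delU * c->u.x + delV * c->v.x + delN * c->n.x;
    c->eye.y += delU * c->u.y + delV * c->v.y + delN * c->n.y;
    c->eye.z += delU * c->u.z + delV * c->v.z + delN * c->n.z;
}

/* Rotations (degrees of freedom 4-6): rotate a pair of axes about the
   remaining one by angleDeg degrees, keeping the frame orthonormal.  */
static void rotatePair(vec3 *a, vec3 *b, double angleDeg)
{
    double rad = angleDeg * M_PI / 180.0, cs = cos(rad), sn = sin(rad);
    vec3 t = *a;
    a->x = cs*t.x - sn*b->x;  a->y = cs*t.y - sn*b->y;  a->z = cs*t.z - sn*b->z;
    b->x = sn*t.x + cs*b->x;  b->y = sn*t.y + cs*b->y;  b->z = sn*t.z + cs*b->z;
}

static void pitch(Camera *c, double a) { rotatePair(&c->v, &c->n, a); }  /* about u */
static void yaw  (Camera *c, double a) { rotatePair(&c->n, &c->u, a); }  /* about v */
static void roll (Camera *c, double a) { rotatePair(&c->u, &c->v, a); }  /* about n */

After any such change, rebuilding the modelview matrix from the new eye, u, v and n (or simply calling gluLookAt() again with an updated look point) gives exactly the pilot’s-eye flight of the challenge above.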


(Actually, I have never flown an aircraft which could do anything but translate directly forward in flight. Flight backward, straight up or down and straight right or left is impossible with fixed-wing aircraft. But your imagination, helicopters and 3D graphics can do all of these!)


Finding a useful user interface which permits full exploitation of all six degrees of freedom is an on-going research problem in human-machine interfacing.


Is it any wonder that game consoles do not depend on keyboard interfaces?