D. Hunter Hale Lecture 13


Admin


Mid-Term Next Tuesday


Review Sheet by Tomorrow at 11:59


Homework 3


Questions/Comments?


Remember the expected unforeseen circumstances shifting when I collect them


Graduate Student Fun


Each graduate student will do a 15 min presentation based on state-of-the-art techniques in game engines


Midterm Course Status


Official Course Goals


Graphics Rendering Pipeline (transformations, lighting, shading); 2D/3D Texture Mapping; Image-Based Rendering; Spatial Data Structures and Acceleration Algorithms; Level of Detail; Collision Detection, Culling and Intersection Methods; Vertex/Pixel Shaders; Pipeline Optimization; Rendering Hardware.


Additional Course Material


Game Engine Architecture, Advanced Shader Programming, HUD & HCI Design, Data Logging, Advanced Lighting and Shadows, Non-Photorealistic and Stylistic Rendering, Timing and Render Loop Management, Machinima, Camera Management






Rendering Spectrum


Normally we render using triangles; however, other techniques exist


There are many ways to use pre-rendered textures and images to fake geometry

Fixed-View Effects


Suppose that you know the view position for every scene in your game


Using this information:


Render your world in advance at high quality


Display this texture at runtime


Done?


Render a depth map along with your world


Render dynamic scene objects by comparing with this depth map and rendering behind/in front of the background image
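
A minimal CPU-side sketch of that comparison: a dynamic object's fragment only wins a pixel if it is closer than the pre-rendered background's depth map. Buffer layout and names are illustrative, not from the lecture.

```cpp
#include <vector>

// Pre-rendered backdrop: a color image plus its baked depth map.
struct Frame {
    std::vector<unsigned int> bgColor; // packed RGBA per pixel
    std::vector<float>        bgDepth; // depth of the offline-rendered scene
    int width;
};

// Called for each pixel a dynamic object covers.
void writeDynamicPixel(Frame& f, int x, int y,
                       unsigned int color, float depth)
{
    int i = y * f.width + x;
    if (depth < f.bgDepth[i]) {   // in front of the background
        f.bgColor[i] = color;
        f.bgDepth[i] = depth;     // keep depth for later objects
    }                             // else: hidden behind the backdrop
}
```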


Light Field Rendering


Light Field Rendering is the science of capturing a single object from many viewpoints


Can be used in engines to fake small, complex objects


Find the two closest viewpoints to the current camera position on the object and interpolate


Why only small, complex objects?
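
A sketch of the viewpoint selection, assuming the captures are parameterized by a single sorted angle around the object (an assumption for illustration; real light fields use denser parameterizations):

```cpp
#include <cstddef>
#include <vector>

// One captured view: the angle it was photographed from plus its texture.
struct CapturedView { float angle; int textureId; };

// Given the camera's current angle around the object, find the two
// nearest captured views and the blend weight between them.
void pickViews(const std::vector<CapturedView>& views, float camAngle,
               std::size_t& a, std::size_t& b, float& weightB)
{
    // Assumes a non-empty list sorted by angle.
    std::size_t hi = 0;
    while (hi < views.size() && views[hi].angle < camAngle) ++hi;
    std::size_t lo = (hi == 0) ? 0 : hi - 1;
    if (hi == views.size()) hi = lo;

    a = lo; b = hi;
    float span = views[hi].angle - views[lo].angle;
    weightB = (span > 0.0f) ? (camAngle - views[lo].angle) / span : 0.0f;
    // Render by blending texture a and texture b with (1-weightB, weightB).
}
```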

Sprites and Layers


One of the oldest techniques in rendering


A sprite is a texture image that is drawn directly to the screen (e.g., a mouse cursor)


Layers of sprites can be built up to represent complex looking scenes using just textures


Think image layers in Photoshop

Billboarding


Allows for the placement of arbitrary textures into space


Generally constructed from three components


Up Vector


Surface Normal


Anchor Location


Using these three things we build a translation and rotation matrix to place the billboard. The rotation matrix is constant for all billboards
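
A minimal sketch of building that matrix from the three inputs, using plain structs and a column-major layout (the layout is an assumption; the lecture doesn't fix one):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Build a column-major 4x4 placing a billboard from its three components:
// up vector, surface normal, and anchor (world position).
void billboardMatrix(Vec3 up, Vec3 normal, Vec3 anchor, float m[16])
{
    Vec3 right  = normalize(cross(up, normal));
    Vec3 trueUp = cross(normal, right);        // re-orthogonalized up
    float out[16] = {
        right.x,  right.y,  right.z,  0.0f,   // X axis of the quad
        trueUp.x, trueUp.y, trueUp.z, 0.0f,   // Y axis of the quad
        normal.x, normal.y, normal.z, 0.0f,   // quad faces the normal
        anchor.x, anchor.y, anchor.z, 1.0f }; // translation to the anchor
    for (int i = 0; i < 16; ++i) m[i] = out[i];
}
```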

Screen-Aligned Billboard


Similar to sprites


the billboard is always parallel to the screen


up vector is constant


Generate a surface normal by taking the negation of the view plane's normal


Up vector is taken directly from the camera


Generally used to display text or other fixed images in the game world, hence the name “billboard”
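
A small sketch of where the shared up vector and normal come from, assuming an OpenGL-style column-major view matrix (an assumption; the lecture doesn't name an API):

```cpp
struct Vec3 { float x, y, z; };

// The rows of a view matrix's rotation block are the camera's basis
// vectors expressed in world space.
void screenAlignedBasis(const float view[16], Vec3& up, Vec3& normal)
{
    // Row 1 of the rotation block: the camera's up vector in world space.
    up     = { view[1], view[5], view[9] };
    // Row 2 is the camera's backward axis, i.e. the negation of the
    // view direction -- exactly the billboard normal we want.
    normal = { view[2], view[6], view[10] };
    // Both are shared by every screen-aligned billboard this frame.
}
```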

World-Oriented


Unlike Screen-Aligned billboards, these billboards display objects present in the game world


Therefore use the world's up vector, while still using the camera's view plane to find the normal


Problems for World-Oriented Billboards


Using the same normal can result in distortion (due to the projection matrix)


Instead, calculate each billboard's normal direction separately, using the center of the billboard and the viewpoint
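
That per-billboard calculation is one line of vector math; a sketch:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Per-billboard normal for world-oriented billboards: aim each quad
// at the viewpoint instead of sharing one screen-plane normal.
Vec3 billboardNormal(Vec3 center, Vec3 eye)
{
    Vec3 d = { eye.x - center.x, eye.y - center.y, eye.z - center.z };
    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    return { d.x/len, d.y/len, d.z/len }; // points at the camera
}
```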

Billboards for Clouds/Weather


In general billboards work well to show atmospheric effects such as clouds or billowing dust


Problems with Billboard Clouds


One common problem with billboard cloud effects occurs when solid objects intersect them


The z-buffer doesn't perform proper clipping


Instead, fade out the clouds when they are near objects
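
A sketch of that fade: scale the fragment's alpha by how far the cloud is in front of the solid geometry behind it, so the quad dissolves instead of being hard-clipped. The fade distance is an artist-tuned assumption, not from the lecture.

```cpp
#include <algorithm>

// cloudDepth:  depth of the billboard fragment being drawn
// sceneDepth:  depth already in the z-buffer at this pixel
float cloudAlpha(float cloudDepth, float sceneDepth,
                 float baseAlpha, float fadeRange)
{
    float gap = sceneDepth - cloudDepth;             // distance to solid object
    float t = std::clamp(gap / fadeRange, 0.0f, 1.0f);
    return baseAlpha * t;                            // 0 at contact, full when clear
}
```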

Axial Billboards


Instead of directly facing the view, these billboards are attached to a fixed world-space axis


The billboard faces the view as best it can along this axis


Commonly used for cylindrical objects and death rays
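
A sketch of the constrained facing: remove the component of the to-eye direction that lies along the fixed axis, and face the remainder.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Axial billboard: the quad may only rotate around a fixed world axis.
Vec3 axialNormal(Vec3 center, Vec3 eye, Vec3 axis /* unit length */)
{
    Vec3 toEye = { eye.x - center.x, eye.y - center.y, eye.z - center.z };
    float d = toEye.x*axis.x + toEye.y*axis.y + toEye.z*axis.z;
    Vec3 n = { toEye.x - d*axis.x,   // strip the component along the axis
               toEye.y - d*axis.y,
               toEye.z - d*axis.z };
    return normalize(n);             // best-effort facing around the axis
}
```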

Particle Systems


Particle systems represent a set of small objects that independently move according to some algorithm


Each object is represented as a single textured quad or a colored point


Common applications for particles are fire, smoke, explosions, water flows, and trees.
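
A minimal CPU update step as a sketch: each particle moves independently under a simple rule (here, constant gravity), and dead particles respawn at the emitter. The numbers are illustrative placeholders.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// One particle: drawn as a textured quad or colored point.
struct Particle {
    Vec3  pos, vel;
    float life; // seconds remaining
};

void updateParticles(std::vector<Particle>& ps, float dt)
{
    const float gravityY = -9.8f;
    for (Particle& p : ps) {
        p.vel.y += gravityY * dt;      // accelerate
        p.pos.x += p.vel.x * dt;       // integrate position
        p.pos.y += p.vel.y * dt;
        p.pos.z += p.vel.z * dt;
        p.life  -= dt;
        if (p.life <= 0.0f)            // recycle at the emitter
            p = { {0, 0, 0}, {0, 5.0f, 0}, 2.0f };
    }
}
```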

Modern Particle Systems


Historically, particle systems were executed exclusively on the CPU


Now modern particle systems can be maintained entirely on the GPU: particles are birthed using the geometry shader, animated in the vertex shader, and drawn with the fragment shader

Billboard Clouds


Not to be confused with using billboards for clouds, a Billboard Cloud is something entirely different


In this technique, instead of using one image to represent an object, you procedurally reduce an object down to dozens of images


This allows for a dramatic reduction in the number of polygons to be rendered but still yields objects that look “real”

Billboard Cloud Example

[Figure: a polygon tree (20,610 triangles) with its wire-frame, beside a billboard cloud tree (156 triangles) with its wireframe]


Displacement Techniques


These techniques allow a sprite image to store depth information for each pixel (depth sprites or nailboards)


This allows each pixel of a sprite to adjust its z-buffer location
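
A sketch of the per-pixel z adjustment, with assumed buffer names: each sprite texel carries a depth offset, so the z test uses the sprite's own shape rather than its flat quad.

```cpp
#include <vector>

struct DepthSprite {
    std::vector<float> depthOffset; // per-texel offset from the quad plane
    int width;
};

// Returns whether this sprite texel passes the depth test.
bool spritePixelVisible(const DepthSprite& s, int tx, int ty,
                        float quadDepth, float zbufferDepth)
{
    // Displace the quad's depth by this texel's stored offset ...
    float d = quadDepth + s.depthOffset[ty * s.width + tx];
    // ... then run the usual z-buffer comparison against the scene.
    return d < zbufferDepth;
}
```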


Image Post Processing


Effectively, the modern pixel shaders present in GPUs are extremely effective image processing units


By rendering a 3D scene twice (once to a texture and then again as a full-screen image), any number of image processing algorithms can be applied to it.
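
As a sketch, here a simple grayscale filter stands in for whatever image-space algorithm runs over the captured frame before the final present:

```cpp
#include <vector>

struct RGB { float r, g, b; };

// Post-processing pass over a frame that was first rendered to a texture.
void grayscalePass(std::vector<RGB>& frame)
{
    for (RGB& p : frame) {
        // Standard luminance weights for converting color to gray.
        float y = 0.299f * p.r + 0.587f * p.g + 0.114f * p.b;
        p = { y, y, y };
    }
}
```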

Color Correction/Tone Mapping


Normally use 2 or more sets of textures to represent different lighting tones or modes in an environment


Instead apply a transform lookup function


This function is generally a 1D texture: enter a color value, pull the resulting color
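
A minimal sketch of that lookup: the "transform function" is just a 256-entry table per channel, authored offline.

```cpp
#include <array>
#include <cstdint>

struct ColorLUT {
    std::array<std::uint8_t, 256> r, g, b; // one remap table per channel
};

// Enter a color value, pull the remapped one.
void applyLUT(const ColorLUT& lut,
              std::uint8_t& cr, std::uint8_t& cg, std::uint8_t& cb)
{
    cr = lut.r[cr];
    cg = lut.g[cg];
    cb = lut.b[cb];
}
```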

High Dynamic Range Imaging


The intensity of light in a real scene can vary greatly (1000-10000X difference in luminance)


Unfortunately, light data is generally stored as 4 bytes. This limits the difference in lighting in a scene to 0-255.


Solution: Render the scene once for each group of radically differently illuminated objects, then composite the final image together
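
Separate from the multi-pass compositing above, a common single-operator way to squeeze HDR values into the 0-255 budget is the Reinhard operator x/(1+x); this sketch shows that standard operator, not the lecture's compositing scheme.

```cpp
#include <algorithm>
#include <cstdint>

// Map an HDR luminance (possibly >> 1) into a displayable byte.
std::uint8_t toneMap(float hdr)
{
    float mapped = hdr / (1.0f + hdr);   // compress [0, inf) into [0, 1)
    return static_cast<std::uint8_t>(
        std::clamp(mapped * 255.0f, 0.0f, 255.0f));
}
```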

Lens Flare and Bloom


Lens Flares are caused by imperfections in the camera lens and the internal geometry of the camera


These rarely appear in photographs since the introduction of better cameras


However, they are almost expected in games and easy to create

Lens Flare Implementation


Draw colored billboards over the areas expected to have a lens flare; the color of these billboards determines the flare color


Blend these boxes with a series of flare mask textures
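
One common placement scheme, sketched with illustrative numbers: flare elements lie on the line from the light's screen position through the screen center, each at a fixed fraction along that line.

```cpp
struct Vec2 { float x, y; };

// Position of one flare billboard at fraction t along the flare axis.
Vec2 flareBillboardPos(Vec2 lightScreenPos, float t)
{
    const Vec2 center = { 0.5f, 0.5f };  // normalized screen center
    return { lightScreenPos.x + t * (center.x - lightScreenPos.x),
             lightScreenPos.y + t * (center.y - lightScreenPos.y) };
}
// e.g. draw masked billboards at t = 0.0, 0.5, 1.0, 1.5, ...
```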

Depth of Field


In photography there is a concept of focal depth, where objects outside this depth appear blurry


This can be used to highlight or draw attention to certain areas of the screen


This can be coded by using the depth buffer to determine if a pixel should be blurred when it is drawn
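
A sketch of the per-pixel decision: the blur weight is 0 at the focal depth and ramps to 1 as the pixel's depth leaves the in-focus band. The focal depth and range are tunable assumptions.

```cpp
#include <algorithm>
#include <cmath>

float blurAmount(float pixelDepth, float focalDepth, float focusRange)
{
    float dist = std::fabs(pixelDepth - focalDepth);  // distance from focus
    return std::clamp(dist / focusRange, 0.0f, 1.0f);
    // Blend a blurred copy of the frame with the sharp one by this weight.
}
```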


Motion Blur


Initially caused by the sampling rate of cameras: the camera shutter is open for a finite amount of time, and the objects it is viewing are moving during that time


Conversely, when rendering a scene, the “camera” we use is instantly opened and closed. We are effectively instantly “sampling” our scene

Two Motion Blur Implementations


Accumulate multiple frames and blend between them to produce a ghost movement effect


Encode the direction of movement into a render buffer and blur moving objects based on their direction
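
A sketch of the first variant: blend each new frame into a running history buffer so moving objects leave a ghost trail. The alpha value is illustrative and controls trail length.

```cpp
#include <cstddef>
#include <vector>

struct RGB { float r, g, b; };

void accumulate(std::vector<RGB>& history,
                const std::vector<RGB>& current, float alpha = 0.6f)
{
    for (std::size_t i = 0; i < history.size(); ++i) {
        // New frame dominates; old frames decay into a ghost trail.
        history[i].r = alpha * current[i].r + (1.0f - alpha) * history[i].r;
        history[i].g = alpha * current[i].g + (1.0f - alpha) * history[i].g;
        history[i].b = alpha * current[i].b + (1.0f - alpha) * history[i].b;
    }
}
```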

Advanced Fog


As we discussed previously, fog is just a simple matter of comparing the depth value of a pixel to a fog equation. But this produces fog that is always flat to the view and can cause artifacts if the user rotates their view.
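
A sketch of the simple version described above, using an exponential falloff as the fog equation (one common choice) and blending toward the fog color by the result:

```cpp
#include <algorithm>
#include <cmath>

struct RGB { float r, g, b; };

RGB applyFog(RGB scene, RGB fogColor, float depth, float density)
{
    float f = std::exp(-density * depth);           // 1 near, 0 far
    f = std::clamp(f, 0.0f, 1.0f);
    return { f * scene.r + (1.0f - f) * fogColor.r, // lerp toward fog
             f * scene.g + (1.0f - f) * fogColor.g,
             f * scene.b + (1.0f - f) * fogColor.b };
}
```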

Volume Rendering


In addition to 2D images, textures can store 3D spatial data. Effectively, this lets a texture represent some volume of space (voxels)


This data can be accessed by casting rays through the voxel data in the pixel shader (limited raycasting). Using this, environmental or physical data encoded into a voxel grid can be presented in real time (e.g., fire, smoke, liquids)
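
A CPU-side sketch of the limited raycasting: march a ray through the voxel grid in fixed steps, accumulating density front to back. Grid size and step count are illustrative; the real version runs per pixel in the shader.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct VoxelGrid {
    std::vector<float> density; // res^3 samples in [0,1]
    int res;
    float at(int x, int y, int z) const {
        return density[(z * res + y) * res + x];
    }
};

// origin and dir are in normalized [0,1] volume coordinates.
float marchRay(const VoxelGrid& g, Vec3 origin, Vec3 dir, int steps)
{
    float accum = 0.0f;
    for (int i = 0; i < steps && accum < 1.0f; ++i) {
        float t = static_cast<float>(i) / steps;   // parameter along the ray
        int x = static_cast<int>((origin.x + t * dir.x) * g.res);
        int y = static_cast<int>((origin.y + t * dir.y) * g.res);
        int z = static_cast<int>((origin.z + t * dir.z) * g.res);
        if (x < 0 || y < 0 || z < 0 || x >= g.res || y >= g.res || z >= g.res)
            continue;                              // outside the volume
        accum += g.at(x, y, z) / steps;            // accumulate opacity
    }
    return accum; // use as smoke/fire opacity for this pixel
}
```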

Fur and Hair


Using modern graphics hardware (DirectX 11, OpenGL 4) it is possible to physically simulate hair and fur at runtime.


The color, length, and angle of the fur are encoded onto a surface. Then the geometry shader reads this information and adds lines to the world that represent this new geometry
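
A CPU-side sketch of what the geometry shader does per surface sample: read the encoded direction and length and emit a line segment as new geometry. The field names are assumptions for illustration.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// One surface sample with its encoded fur attributes.
struct FurSample { Vec3 root; Vec3 dir; float length; };

void emitFurLines(const std::vector<FurSample>& samples,
                  std::vector<Vec3>& lineVerts)
{
    for (const FurSample& s : samples) {
        lineVerts.push_back(s.root);                          // strand base
        lineVerts.push_back({ s.root.x + s.dir.x * s.length,  // strand tip
                              s.root.y + s.dir.y * s.length,
                              s.root.z + s.dir.z * s.length });
    }
}
```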