Chapter 25: Robotics


April 27, 2004

The Week Ahead …


Wednesday: Dmitrii Zagorodnov



Thursday: Jeff Elser’s presentation,
general discussion



Friday: Rafal Angryk



Monday: CS 536 final @ 2 p.m.

25.1 Introduction


Robot Components:


Sensors


Effectors


Processors


Robot Types:


Manipulators (> 1 million worldwide)


Mobile (ULV and planetary, UAV, AUV)


Hybrid


Other (prosthetic devices, multibody systems)


Typical Environments


Partially Observable


Stochastic


Dynamic


Continuous

25.2 Robot Hardware


Sensors


passive (e.g. camera)


active (e.g. sonar, laser, radar)



Record distances, Figure 25.2


Record images


Record properties of the robot itself
(proprioceptive), e.g. inertial sensors

Effectors


Degrees of Freedom (DOF), e.g. a wrist
has 3 DOF



A car has 2 controllable DOF but 3
effective DOF



A non-holonomic robot has more effective DOF than controllable DOF
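
To make the car example concrete, here is a minimal Python sketch of a kinematic "bicycle" car model (the function name, wheelbase, and step size are illustrative assumptions, not from the text): the two controllable DOF are the speed and the steering angle, while the pose being controlled has three effective DOF (x, y, heading).

import math

def car_step(x, y, theta, v, steer, wheelbase=2.5, dt=0.1):
    """One step of a simple kinematic 'bicycle' car model (illustrative).
    Controls: v (speed) and steer (steering angle)  -> 2 controllable DOF.
    State:    pose (x, y, theta)                    -> 3 effective DOF.
    The car cannot translate sideways, so it is non-holonomic."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / wheelbase) * math.tan(steer) * dt
    return x, y, theta

# Driving forward with the wheels turned changes the heading, even though
# the heading is never commanded directly.
pose = (0.0, 0.0, 0.0)
for _ in range(20):
    pose = car_step(*pose, v=1.0, steer=0.3)
print(pose)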


Effectors


Most robot arms are holonomic (simpler)


Most mobile robots are non-holonomic



Prismatic joints allow sliding motion


Revolute joints allow rotational motion



Dynamic stability vs. Static stability


Power Sources: electric motor, pneumatic
actuator, hydraulic actuator

25.7 Robotic Software Architecture


Subsumption Architecture, Rodney
Brooks, 1986


Application: wall following (sketched in code below)


a framework to assemble reactive (as
opposed to deliberative) controllers out of
FSAs.


Figure 25.22


Difficult to understand


Difficult to change behavior (wasp)
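
As a rough illustration of the wall-following application mentioned above, the Python sketch below approximates a subsumption-style controller as a stack of prioritized reactive rules (the sensor names and thresholds are made up; a real subsumption controller composes concurrently running augmented finite state machines with suppression wires rather than a single if-chain).

def wall_follower(left_dist, front_dist, target=0.5):
    """Reactive wall follower sketched as prioritized layers (illustrative).
    Inputs are assumed range readings; returns (forward_speed, turn_rate).
    Higher layers subsume (override) the layers below them."""
    # Layer 2 (highest priority): avoid an obstacle directly ahead.
    if front_dist < 0.4:
        return 0.0, -1.0          # stop and turn away
    # Layer 1: keep the left wall at roughly the target distance.
    if left_dist > target + 0.1:
        return 0.3, 0.5           # drift toward the wall
    if left_dist < target - 0.1:
        return 0.3, -0.5          # drift away from the wall
    # Layer 0 (default): cruise forward.
    return 0.5, 0.0

print(wall_follower(left_dist=0.8, front_dist=2.0))  # steers toward the wall
print(wall_follower(left_dist=0.5, front_dist=0.2))  # avoidance layer subsumes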

Three Layer Architecture


Very common today



Reactive Layer (sense-act loop; see the sketch below)


Executive Layer


Deliberative Layer
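
A schematic Python sketch of how the three layers are often wired together (the class, method, and parameter names are hypothetical): the reactive layer runs every control tick, the executive layer sequences steps of the current plan, and the deliberative layer replans only occasionally.

class ThreeLayerController:
    """Schematic three-layer controller; names and rates are illustrative."""

    def __init__(self, planner, executive, reactive):
        self.planner = planner      # deliberative layer: slow, model-based planning
        self.executive = executive  # executive layer: sequences plan steps
        self.reactive = reactive    # reactive layer: fast sense-act loop
        self.plan = []

    def tick(self, sensors, step):
        # Deliberative layer: replan only occasionally (here every 100 ticks).
        if step % 100 == 0:
            self.plan = self.planner(sensors)
        # Executive layer: choose the current subgoal from the plan.
        subgoal = self.executive(self.plan, sensors)
        # Reactive layer: turn the subgoal into an immediate motor command.
        return self.reactive(subgoal, sensors)

# Toy usage with stand-in layer functions:
controller = ThreeLayerController(
    planner=lambda sensors: ["go_to_door", "open_door"],
    executive=lambda plan, sensors: plan[0] if plan else "idle",
    reactive=lambda goal, sensors: (0.5, 0.0) if goal != "idle" else (0.0, 0.0),
)
print(controller.tick(sensors={}, step=0))   # -> (0.5, 0.0)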

Robotic Programming Languages


General Robot Language, GRL, 2000


a functional programming language


uses FSMs as building blocks


provides communication and control
constructs



C++ for Embedded Systems, CES, 2000


integrates probability and learning

Robotic Programming Languages


Reactive Action Plan System, RAPS, 1994


can specify goals, plans, and conditions for likely plan success



ALisp, 2002


can program non-deterministic choice points


learns via reinforcement learning

25.8 Application Domains


Industry


Agriculture


Transportation, Figure 25.23; the challenge is to use natural cues to localize the robot


Hazardous Environments


Exploration, Figure 25.24


Health Care, Figure 25.23


Personal Service


Entertainment, Figure 25.4b


Human Augmentation

25.4 Planning to Move


Assume


motions are deterministic


localization is exact



Point to point motion


Compliant motion



Configuration space includes location,
orientation, joint angles

Path Planning


Involves continuous spaces



Two common techniques that map the
continuous space onto a discrete space


cell decomposition


skeletonization

Configuration Space


A workspace representation is easier. For example, in Figure 25.12(a) everything can be specified by (x_e, y_e) and (x_s, y_s)


The problem is that not every combination of workspace coordinates is realizable, because the linkage constrains them

Configuration Space


Use (φ_e, φ_s), the angles of the joints



Kinematics: Maps a configuration space onto a
workspace (easy)


Inverse Kinematics: Maps a workspace onto a configuration space (see the two-link sketch below)



Obstacles, Figure 25.12b



Free Space vs. Occupied Space, Figure 25.13
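
As a concrete instance of the kinematics vs. inverse kinematics distinction for a planar two-link arm like the one in Figure 25.12, here is a Python sketch (the link lengths L1 and L2 are made-up values): forward kinematics maps the joint angles to the gripper's workspace position, and inverse kinematics inverts that mapping, returning one of the at most two mirror-image solutions.

import math

L1, L2 = 1.0, 0.8   # assumed link lengths (shoulder-to-elbow, elbow-to-gripper)

def forward_kinematics(phi_s, phi_e):
    """Map a configuration (shoulder angle, elbow angle) to workspace (x, y)."""
    x = L1 * math.cos(phi_s) + L2 * math.cos(phi_s + phi_e)
    y = L1 * math.sin(phi_s) + L2 * math.sin(phi_s + phi_e)
    return x, y

def inverse_kinematics(x, y):
    """Map a workspace point (x, y) back to joint angles, if reachable.
    Returns one of the two mirror-image solutions; negating phi_e gives the other."""
    cos_e = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(cos_e) > 1.0:
        return None                      # point lies outside the reachable workspace
    phi_e = math.acos(cos_e)
    phi_s = math.atan2(y, x) - math.atan2(L2 * math.sin(phi_e),
                                          L1 + L2 * math.cos(phi_e))
    return phi_s, phi_e

# Round trip: configuration -> workspace -> configuration
x, y = forward_kinematics(0.4, 0.9)
print(inverse_kinematics(x, y))          # approximately (0.4, 0.9)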


Cell Decomposition


Figure 25.14


Each region can be solved simply



Rectangles


hard for high dimensions


mixed cells are challenging (we want neither unsound solutions nor an incomplete planner)


Irregular Shapes



Potential Field, Figure 25.15
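
A minimal potential-field sketch in Python (the goal, obstacle, gains, and step size are illustrative): the robot descends the gradient of an attractive-plus-repulsive field; note that such fields can trap the robot in local minima, which is why they are usually paired with a global planner.

import math

def potential_gradient(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Gradient at 'pos' of an attractive-plus-repulsive potential.
    k_att, k_rep, d0 are tuning constants (attraction gain, repulsion gain,
    and the distance beyond which obstacles exert no force)."""
    gx = k_att * (pos[0] - goal[0])          # attractive term pulls toward the goal
    gy = k_att * (pos[1] - goal[1])
    for ox, oy in obstacles:                 # repulsive terms push away from obstacles
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < d0:
            scale = k_rep * (1.0 / d - 1.0 / d0) / d**3
            gx -= scale * dx
            gy -= scale * dy
    return gx, gy

# Gradient descent from the start toward the goal, nudged around one obstacle.
pos, goal, obstacles = (0.0, 0.0), (5.0, 0.0), [(2.5, 0.2)]
for _ in range(200):
    gx, gy = potential_gradient(pos, goal, obstacles)
    pos = (pos[0] - 0.05 * gx, pos[1] - 0.05 * gy)
print(pos)   # ends close to the goal after skirting the obstacle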

Skeletonization


Reduce the robot’s free space to 1-D



Voronoi Graphs, Figure 25.16a


Map the initial point onto the Voronoi Graph


Follow Voronoi Graph


Map point on Voronoi Graph onto goal point


Probabilistic Roadmaps, Figure 25.16b


Offers more routes than Voronoi Graphs
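
A toy probabilistic-roadmap sketch in Python (the circular obstacle, sample count, and connection radius are stand-ins for a real map and real tuning): sample random free configurations, connect nearby pairs whose straight segment is collision-free, then search the resulting graph from the start to the goal.

import math, random

def collision_free(p, q, obstacle=((2.0, 2.0), 1.0), steps=20):
    """Stand-in collision check: the segment p-q must stay outside one
    circular obstacle (center, radius). A real planner would query a map."""
    (cx, cy), r = obstacle
    for i in range(steps + 1):
        t = i / steps
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        if math.hypot(x - cx, y - cy) < r:
            return False
    return True

def build_prm(start, goal, n_samples=150, radius=1.2, size=5.0):
    """Probabilistic roadmap: random free samples plus local connections."""
    nodes = [start, goal] + [(random.uniform(0, size), random.uniform(0, size))
                             for _ in range(n_samples)]
    nodes = [p for p in nodes if collision_free(p, p)]   # drop samples inside the obstacle
    edges = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        for j, q in enumerate(nodes):
            if i < j and math.dist(p, q) < radius and collision_free(p, q):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

def find_path(nodes, edges):
    """Breadth-first search on the roadmap from node 0 (start) to node 1 (goal)."""
    frontier, parent = [0], {0: None}
    while frontier:
        i = frontier.pop(0)
        if i == 1:
            path = []
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
        for j in edges[i]:
            if j not in parent:
                parent[j] = i
                frontier.append(j)
    return None                      # roadmap happened to be disconnected

nodes, edges = build_prm(start=(0.5, 0.5), goal=(4.5, 4.5))
print(find_path(nodes, edges))       # a polygonal path around the obstacle (or None)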

Exercise 25.8


Humans are so adept at basic tasks such
as picking up cups or stacking blocks that
they often forget how complex these tasks
are. In this exercise, you will discover the
complexity and recapitulate the last 30
years of developments in robotics. First,
pick a task, such as building an arch out of
three blocks. Then, build a robot out of
four humans as follows:

Exercise 25.8


Brain. The job of the Brain is to come up with a
plan to achieve the goal and to direct the Hands
in the execution of the plan. The Brain receives
input from the Eyes, but cannot see the scene
directly. The Brain is the only one who knows
what the goal is.


Eyes. The Eyes’ job is to report a brief
description of the scene to the Brain. The Eyes
should stand a few feet away from the working
environment, and can provide qualitative or
quantitative descriptions. The Eyes
can also answer questions from the Brain.


Exercise 25.8


Left Hand and Right Hand. One person plays each
Hand. The two Hands stand next to each other; the Left
Hand uses only his or her left hand, and the Right Hand
only his or her right hand. The Hands execute only
simple commands from the Brain; for example, “Left
Hand, move two inches forward.” They cannot execute
commands other than motions; for example, “Pick up the
box” is not something a Hand can do. The Hands must
be blindfolded. The only sensory capability they have is
the ability to tell when their path is blocked by an
immovable obstacle such as a table or the other Hand.
In such cases, they can beep to inform the Brain of the
difficulty.