

Nov 2, 2013


Robotics

CSPP 56553

Artificial Intelligence

March 10, 2004

Roadmap


Robotics is AI-complete


Integration of many AI techniques


Classic AI


Search in configuration space


(Ultra) Modern AI


Subsumption architecture


Multi-level control


Conclusion

Mobile Robots

Robotics is AI-complete


Robotics integrates many AI tasks


Perception


Vision, sound, haptics


Reasoning


Search, route planning, action planning


Learning


Recognition of objects/locations


Exploration

Sensors and Effectors


Robots interact with the real world


Need direct sensing for


Distance to objects


range finding/sonar/GPS


Recognize objects


vision


Self-sensing


proprioception: pose/position


Need effectors to


Move self in world: locomotion: wheels, legs


Move other things in world: manipulators


Joints, arms: complex, with many degrees of freedom

Real World Complexity


Real world is hardest environment


Partially observable, multiagent, stochastic


Problems:


Localization and mapping


Where things are


What routes are possible


Where robot is


Sensors may be noisy; Effectors are imperfect


Don’t necessarily go where intended


Solved in probabilistic framework
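The probabilistic framing can be sketched with a tiny 1-D histogram (Bayes) filter: the robot keeps a belief distribution over map cells, weights it by noisy sensor readings, and blurs it on each imperfect move. The corridor map, sensor accuracy, and move-success probability below are made-up illustration values, not from the slides.

```python
# Minimal 1-D histogram (Bayes) filter sketch; map and noise values are
# hypothetical. Cells are either "door" or "wall".
WORLD = ["door", "wall", "door", "wall", "wall"]

def sense(belief, measurement, p_hit=0.8, p_miss=0.2):
    """Weight each cell by how well it explains the noisy measurement."""
    weighted = [b * (p_hit if cell == measurement else p_miss)
                for b, cell in zip(belief, WORLD)]
    total = sum(weighted)
    return [w / total for w in weighted]

def move(belief, p_exact=0.9, p_stay=0.1):
    """Shift belief one cell right; effectors are imperfect, so some
    probability mass stays put (cyclic corridor for simplicity)."""
    n = len(belief)
    return [p_exact * belief[(i - 1) % n] + p_stay * belief[i]
            for i in range(n)]

belief = [1.0 / len(WORLD)] * len(WORLD)   # start fully uncertain
belief = sense(belief, "door")             # observe a door
belief = move(belief)                      # drive one cell right
belief = sense(belief, "wall")             # observe a wall
```

After seeing a door, moving right, and seeing a wall, the belief concentrates on the cells just past the two doors, exactly the "where is the robot" question the slide poses.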

Navigation

Application: Configuration Space


Problem: Robot navigation


Move robot between two objects without
changing orientation


Possible?


Complex search space: boundary tests, etc.


Configuration Space


Basic problem: infinite states! Convert to finite
state space.


Cell decomposition:


divide up space into simple cells, each of which can be
traversed “easily” (e.g., convex)

Skeletonization:


Identify finite number of easily connected points/lines
that form a graph such that any two points are
connected by a path on the graph
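Cell decomposition can be sketched in a few lines: overlay a grid on the workspace, mark cells that intersect obstacles as blocked, and search the free cells. The grid, obstacle layout, and start/goal cells below are hypothetical.

```python
# Minimal cell-decomposition sketch: a made-up grid where 0 = free cell,
# 1 = cell intersecting an obstacle; BFS finds a traversable cell path.
from collections import deque

GRID = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

def traversable_path(start, goal):
    """BFS over free cells; each step moves into an adjacent free cell."""
    rows, cols = len(GRID), len(GRID[0])
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and GRID[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None

path = traversable_path((0, 0), (2, 3))
```

The finite cell graph stands in for the infinite continuous space; any path through free cells can then be refined into an actual trajectory.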

Skeletonization Example


First step: Problem transformation


Model robot as point


Model obstacles by expanding their perimeters by the
robot’s shape (the path of the robot sliding around them)


“Configuration Space”: simpler search

Navigation

Navigation as Simple Search


Replace funny robot shape in a field of funny-shaped
obstacles with


Point robot in field of configuration shapes


All movement is:


Start to vertex, vertex to vertex, or vertex to goal


Search: Start, vertices, goal, & connections


A* search yields efficient least cost path
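The search over start, obstacle vertices, and goal can be sketched as A* on a small visibility-style graph, using straight-line distance as the admissible heuristic. The coordinates and edge list below are invented for illustration.

```python
# Minimal A* sketch over a configuration-space vertex graph; the points
# and visibility edges are hypothetical.
import heapq
import math

POINTS = {"start": (0, 0), "v1": (2, 1), "v2": (2, -1), "goal": (5, 0)}
EDGES = {  # which points are connected by obstacle-free segments
    "start": ["v1", "v2"],
    "v1": ["start", "goal"],
    "v2": ["start", "goal"],
    "goal": ["v1", "v2"],
}

def dist(a, b):
    (ax, ay), (bx, by) = POINTS[a], POINTS[b]
    return math.hypot(ax - bx, ay - by)

def astar(start, goal):
    """A* with straight-line distance as the (admissible) heuristic."""
    frontier = [(dist(start, goal), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt in EDGES[node]:
            g2 = g + dist(node, nxt)
            if g2 < best.get(nxt, math.inf):
                best[nxt] = g2
                heapq.heappush(
                    frontier, (g2 + dist(nxt, goal), g2, nxt, path + [nxt]))
    return None, math.inf

path, cost = astar("start", "goal")
```

All movement is start-to-vertex, vertex-to-vertex, or vertex-to-goal, exactly as the slide describes, so the continuous navigation problem becomes ordinary graph search.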

Online Search


Offline search:


Think a lot, then act once


Online search:


Think a little, act, observe, think, ...


Necessary for exploration, (semi)dynamic environments


Components: Actions, step-cost, goal test


Compare cost to the optimum if the environment were known



Competitive ratio (possibly infinite)

Online Search Agents


Exploration:


Perform action in state -> record result


Search locally


Why? DFS? BFS?


Backtracking requires reversibility


Strategy: Hill-climb


Use memory: if stuck, try apparent best neighbor


Unexplored state: assume it is closest (optimism)


Encourages exploration
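The hill-climbing-with-memory strategy can be sketched in the spirit of LRTA*: keep a table of cost estimates, treat unexplored states optimistically (here, Manhattan distance), move to the apparent best neighbor, and update the current state's estimate from it. The 3x3 grid and goal are made-up; this is my illustration, not code from the slides.

```python
# Minimal LRTA*-style online agent sketch on a hypothetical 3x3 grid.
GOAL = (2, 2)

def h(state):
    """Optimistic estimate for unexplored states: Manhattan distance."""
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

def neighbors(state):
    r, c = state
    return [(r + dr, c + dc)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr <= 2 and 0 <= c + dc <= 2]

def lrta_run(start, max_steps=50):
    H, state, trace = {}, start, [start]
    while state != GOAL and max_steps > 0:
        # Apparent best neighbor under current estimates (step cost 1).
        best = min(neighbors(state), key=lambda s: 1 + H.get(s, h(s)))
        # Memory update: this state costs at least a step via its best neighbor.
        H[state] = 1 + H.get(best, h(best))
        state = best
        trace.append(state)
        max_steps -= 1
    return trace

trace = lrta_run((0, 0))
```

Because "stuck" states have their estimates raised on each visit, the agent eventually prefers untried neighbors, which is how the scheme encourages exploration.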

Acting without Modeling


Goal: Move through terrain


Problem I: Don’t know what terrain is like


No model!


E.g. rover on Mars


Problem II: Motion planning is complex


Too hard to model


Solution: Reactive control

Reactive Control Example


Hexapod robot in rough terrain



Sensors inadequate for full path planning



2 DOF × 6 legs: kinematics make full planning intractable

Model-free Direct Control


No environmental model


Control law:


Each leg cycles: on ground; in air


Coordinate so that 3 legs on ground (opposing)


Retain balance


Simple, works on flat terrain
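The control law amounts to an alternating tripod gait: legs cycle between ground and air, coordinated so one tripod of three opposing legs is always down. A minimal sketch (leg numbering and step discretization are my own):

```python
# Minimal tripod-gait sketch: legs 0..5, alternating tripods {0,2,4} and
# {1,3,5}; at every time step exactly three opposing legs are grounded.
def tripod_gait(steps):
    """Return, per time step, the set of legs currently on the ground."""
    tripod_a, tripod_b = {0, 2, 4}, {1, 3, 5}
    return [tripod_a if t % 2 == 0 else tripod_b for t in range(steps)]

gaits = tripod_gait(4)
```

No environment model appears anywhere: balance follows from the invariant that three opposing legs are always on the ground.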


Handling Rugged Terrain


Problem: Obstacles


Block leg’s forward motion


Solution: Add control rule


If blocked, lift higher and repeat


Implementable as FSM


Reflex agent with state

FSM Reflex Controller

[Diagram: a four-state reflex controller (S1–S4) cycling the leg through lift up → move forward → set down → push back, with a "stuck?" check after the forward move: if yes, retract and lift higher, then retry; if no, set down and continue.]
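The diagram's four states can be written as a small table-driven state machine; the state names and the stuck-recovery rule follow the slide, while the scripted sensor readings are my stand-in for a real "stuck" sensor.

```python
# Minimal sketch of the four-state leg reflex controller; the stuck sensor
# is simulated by a scripted sequence of boolean readings.
def leg_controller(stuck_readings):
    """Run one full leg cycle, returning the sequence of actions taken."""
    actions, state = [], "S1"
    readings = iter(stuck_readings)
    while True:
        if state == "S1":                  # lift the leg
            actions.append("lift up")
            state = "S2"
        elif state == "S2":                # swing it forward
            actions.append("move forward")
            if next(readings, False):      # blocked by an obstacle?
                actions.append("retract, lift higher")  # recovery rule, retry
            else:
                state = "S3"
        elif state == "S3":                # plant the leg
            actions.append("set down")
            state = "S4"
        elif state == "S4":                # drive the body forward
            actions.append("push back")
            return actions

actions = leg_controller([True, False])    # stuck once, then clear
```

This is a reflex agent with state: the only memory is which of S1–S4 the controller is in.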

Emergent Behavior


Reactive controller walks robustly


Model-free; no search/planning


Depends on feedback from the environment


Behavior emerges from interaction


Simple software + complex environment


Controller can be learned


Reinforcement learning

Subsumption Architecture


Assembles reactive controllers from FSMs


Test and condition on sensor variables


Arcs tagged with messages; sent when traversed


Messages go to effectors or other FSMs


Clocks control time to traverse an arc: augmented FSM (AFSM)


E.g. previous example


Reacts to contingencies between robot and env


Synchronize, merge outputs from AFSMs
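The merging of AFSM outputs can be sketched as priority arbitration: each behavior maps sensor readings to a command message (or nothing), and a higher-priority behavior's output suppresses the lower one's. This is my illustration of the idea, not Brooks' actual machinery; the behavior names and sensor keys are invented.

```python
# Minimal subsumption-style arbitration sketch; behaviors and sensor
# variables are hypothetical.
def avoid_obstacle(sensors):
    """Higher layer: reacts to contingencies between robot and environment."""
    return "turn left" if sensors.get("obstacle_ahead") else None

def wander(sensors):
    """Base layer: default behavior when nothing suppresses it."""
    return "move forward"

def arbitrate(sensors, behaviors=(avoid_obstacle, wander)):
    """First behavior (highest priority) with an output wins."""
    for behavior in behaviors:
        command = behavior(sensors)
        if command is not None:
            return command

cmd_clear = arbitrate({"obstacle_ahead": False})
cmd_blocked = arbitrate({"obstacle_ahead": True})
```

Layering new behaviors on top of working ones, rather than rewriting them, is the bottom-up design the next slide describes.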

Subsumption Architecture


Compose controllers from compositions of AFSMs


Bottom up design


Single to multiple legs, to obstacle avoidance


Avoids complexity and brittleness


No need to model drift, sensor error, effector error


No need to model full motion

Subsumption Problems


Relies on raw sensor data


Sensitive to failure, limited integration


Typically restricted to local tasks


Hard to change task


Emergent behavior


not a specified plan


Hard to understand


Interactions of multiple AFSMs complex


Solution


Hybrid approach



Integrates classic and modern AI


3 layer architecture


Base reactive layer: low-level control


Fast sensor action loop


Executive (glue) layer


Sequence actions for reactive layer


Deliberative layer


Generates global solutions to complex tasks with planning


Model-based: pre-coded and/or learned


Slower


Some variant appears in most modern robots
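The three layers can be sketched as nested loops: a slow, model-based planner produces waypoints, the executive layer sequences them, and a fast reactive loop closes the sensor-action gap toward the current waypoint. The grid world and greedy motion below are deliberately toy stand-ins.

```python
# Minimal three-layer architecture sketch; the planner and reactive rule
# are hypothetical toy versions.
def deliberative_plan(start, goal):
    """Slow, model-based layer: here just straight-line grid waypoints."""
    x, y = start
    plan = []
    while (x, y) != goal:
        x += (goal[0] > x) - (goal[0] < x)
        y += (goal[1] > y) - (goal[1] < y)
        plan.append((x, y))
    return plan

def reactive_step(pose, waypoint):
    """Fast sensor-action loop: one greedy move toward the waypoint."""
    return (pose[0] + (waypoint[0] > pose[0]) - (waypoint[0] < pose[0]),
            pose[1] + (waypoint[1] > pose[1]) - (waypoint[1] < pose[1]))

def executive(start, goal):
    """Glue layer: feed the plan's waypoints to the reactive layer in order."""
    pose, trace = start, [start]
    for waypoint in deliberative_plan(start, goal):
        while pose != waypoint:
            pose = reactive_step(pose, waypoint)
            trace.append(pose)
    return trace

trace = executive((0, 0), (3, 2))
```

The division of labor is the point: only the top layer reasons globally, and only the bottom layer runs at sensor rate.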

Conclusion


Robotics as AI microcosm


Back to PEAS model


Performance measure, environment, actuators, sensors


Robots as agents act in the full, complex real world


Tasks rely on actuators and sensing of the environment


Exploits perceptions, learning, and reasoning


Integrates classic AI search and representation with
modern learning, robustness, and real-world focus