Automatic Catching Machine with Binocular Vision

ECE 445: Senior Design


Project by:
James Keiller, Rene A. Silva & Preeti Vasudevan



TA: Kieran Levin



Abstract: A robot equipped with binocular vision, capable of identifying a ball thrown through the air, tracking its position, and moving to intercept and catch it. The horizontal movement will be handled by a wheeled base, and the catching unit will be mounted on a vertical rail.


Introduction:

Title: Automatic Catching Machine with Binocular Vision

With the dawning of the “robotic age,” the need to provide robots with the tools they need to better understand their surroundings has led us to this experiment in binocular vision. It is binocular disparity that allows humans to determine the depth of objects in their field of vision without having the advantage of sonar built into our biological systems.

The “state of the art” of machine vision is such that robots that work with materials use two or more cameras to gain a three-dimensional understanding of items in their working space, and they are able to manipulate them to some degree. The cameras are always at known positions, and the robot has the advantage of a secure environment. There have been experiments with true binocular vision, but oftentimes the cameras face forward at all times. This simplifies the mathematics needed to work out an object’s position relative to the robot, but limits the speed at which the machine can track objects.

Our plan is to build a system with binocular vision that takes advantage of the alignment of the “eyes” to track objects. We will demonstrate our system by showing that the robot can determine the x-y-z position of a tennis ball relative to itself, that it can follow the ball around in real time, and that it can respond to changes in position rapidly enough to catch the ball when it is thrown near the machine. It is our hope that the techniques we develop will be used in the future. For example, a robot exploring a new environment could differentiate objects it had never seen before from the background by traveling around them and continually taking note of the position of their features in x-y-z space until it has “learned” about the object.


Objectives:

The primary objective of the project is to build a system that uses visual data to track a ball in space. The system uses image data to aim its cameras at its target of interest while using a working approximation of the object’s relative position to move sideways to intercept. It will align its vertical arm and position itself so accurately that it will be able to catch the ball in a hoop attached to its arm.

Features:

Binocular Vision: The robot will use two cameras to provide binocular vision. Each camera will be mounted on servos that allow it to rotate vertically as well as horizontally. The servo positions, relayed back to the FPGA, will allow the system to know exactly where the two lines of sight cross in x-y-z space. This will allow the robot to determine the position of objects in three-dimensional space in much the same way that the alignment of our human eyes allows our brains to do the same. (A sketch of the triangulation geometry follows.)
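
To make the geometry concrete, here is a minimal Python sketch of how the crossing point could be recovered from the pan/tilt angles and the camera baseline. The coordinate conventions, angle signs, and shared-tilt approximation are our illustrative assumptions, not the final FPGA implementation:

    import math

    def ball_position(theta_l, theta_r, tilt, baseline):
        """Estimate the ball's (x, y, z) position from servo angles (sketch).

        theta_l, theta_r: pan angles in radians, measured inward from straight
                          ahead, for the left and right cameras respectively.
        tilt:             shared tilt angle in radians (positive is up).
        baseline:         distance between the two cameras in meters.

        Cameras are assumed to sit at x = -baseline/2 and x = +baseline/2,
        both looking along +z when their pan angle is zero.
        """
        denom = math.tan(theta_l) + math.tan(theta_r)
        if denom <= 0:
            raise ValueError("lines of sight do not converge in front of the robot")
        z = baseline / denom                       # depth where the lines of sight cross
        x = z * math.tan(theta_l) - baseline / 2   # lateral offset from the midpoint
        y = z * math.tan(tilt)                     # height, using the shared tilt angle
        return x, y, z

    # Example: cameras 30 cm apart, each panned 5 degrees inward, tilted 10 degrees up.
    print(ball_position(math.radians(5), math.radians(5), math.radians(10), 0.30))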





Motion Tracking: The FPGA, along with the cameras, will break down the images taken by the cameras into the information needed to determine the coordinates of the ball. The ball will be identified by recognizing it as the largest collection of pixels similar to a particular color (a sketch of this step follows). The FPGA will direct the robot to move its cameras to obtain a better view of the ball, and will direct the motor base to move to intercept the ball. Additionally, the FPGA will be responsible for calculating the height of the ball and directing the catching arm to intercept.
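
As an illustration of the largest-colored-blob step, a desktop prototype in Python with OpenCV might look like the sketch below. The HSV thresholds are placeholder values for a yellow-green tennis ball and would have to be calibrated; the FPGA pipeline itself will be implemented differently:

    import cv2
    import numpy as np

    def find_ball(frame_bgr, lo=(25, 80, 80), hi=(45, 255, 255)):
        """Locate the largest blob of ball-colored pixels in one frame.

        lo/hi are assumed HSV bounds for a yellow-green tennis ball and
        must be calibrated for the actual ball and lighting.
        """
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))  # pixels near the target color
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None                                      # no ball-colored region found
        ball = max(contours, key=cv2.contourArea)            # largest matching blob
        m = cv2.moments(ball)
        if m["m00"] == 0:
            return None
        return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # centroid in pixels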




Obstruction Avoidance: IR sensors attached to the base of the system will detect any obstruction in the path of the base, avoiding collisions. The arm will not extend beyond the reach of the base.




Robotic "Hand
-
Eye" coordination: The visual data collected f
rom the cameras will be
processed in order to allow the catching arm to intercept the ball.




Zero footprint: Our original design called for a track, installed at the demonstration site, that the robotic catcher would move along. This would provide easy stability, but we are being ambitious and want to create a device that can be set up “on the spot.” We will be building our own wheeled unit.




Variable vertical reach: A multi-rail system is used to allow the system to move the catching arm up and down more rapidly than a single-rail-and-arm system would allow.




AC powered: Although a “production model” would do well to be powered by a rechargeable battery, we decided to use an AC design so that we would never waste time waiting for batteries to recharge before we could run the next experiment.




Modular components: To allow for easier testing, each component’s power supply can be shut off with an on/off switch, and the logic will be soldered such that the components “plug into each other,” so they can be removed for individual adjustment and testing.



Benefits:

1) Multi-purpose end effectors: The system can have interchangeable end effectors, including the currently used catching arm and, potentially, a bat that can make contact with the ball being tracked.


2) Creates a platform with binocular vision that can be used to test theories and mathematical algorithms. Future experiments could include attempts to teach the robot how things bounce. That is to say, beyond learning how to follow this ball, the robot could learn to observe how things move in space, predict how future objects should move, and react accordingly. A robot moving about a facility with binocular vision could be taught to understand how its environment “should be” and raise the alarm if things ever appear out of place in a critical location. For example, insulation peeling from the pipes and landing on the floor in a radiological zone that humans would not want to traverse often might go unnoticed by a robot with single-camera vision, which cannot see around the edges of the curled-up pieces of insulation and would simply see a bit of shadow on an otherwise uniform surface.


3) Use of electric power: The system can be powered by simply plugging it into a wall socket.


4) A ball-collecting robot that understands where it is in space, and thus where it ought not wander, could reduce the cost of hiring people to chase tennis balls all day long. With a different attachment, it could even suck up balls through a tube and into a collection bag.


5) Instead of having a group of firemen inflate a large cushion, a reliable robot capable of catching a fast-moving object (like a human falling from a burning building) could be an indispensable tool. Our design could be put to immediate use testing the feasibility of different strategies that could later be used on a (more expensive) machine.


6) A robot exploring an environment hostile to humans would do well to be able to judge the distance of objects in front of it.




Design:

Below are the block diagram and block descriptions for the proposed system:



Block Diagram:

[Block diagram figure omitted.]

Block Descriptions:



Power Supply:

Power will be supplied by a wall outlet. Internally, the power will be divided and distributed according to the maximum needs of each particular system and component. Components with different power and voltage ratings will have their own stepped power feed from this bank of power supplies.


FPGA:

The FPGA takes in the video streams from the two cameras. Based on an image error-correction algorithm, it will instruct the servos to move so as to center the object in question in the image plane of the cameras; this is how the object is tracked. Based on the angles of the servos, the FPGA will instruct the body to move left or right according to the pan of the cameras, and will instruct the arm to move up or down according to the tilt of the cameras. (A sketch of the centering loop follows.)
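
As a sketch of the centering behavior only (the error-correction algorithm itself is not specified here), a simple proportional controller could convert pixel error into pan/tilt corrections; the frame size and gain below are assumed, uncalibrated values:

    def servo_correction(cx, cy, frame_w=640, frame_h=480, gain=0.05):
        """Proportional centering sketch: return (pan, tilt) corrections in
        degrees that nudge the ball centroid (cx, cy) toward the frame center.
        frame_w, frame_h, and gain are assumed values, not measured ones.
        """
        err_x = cx - frame_w / 2   # positive: ball right of center -> pan right
        err_y = cy - frame_h / 2   # positive: ball below center    -> tilt down
        return gain * err_x, gain * err_y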


Visual Mechanism:

Cameras: Two cameras will be mounted on the base of the system. These cameras will serve as binocular vision for the system, providing visual data to the FPGA.


Servo Motors: The servo motors will help replicate abilities similar to those of a human eye: up, down, and sideways. Like the human eye, the servo motors will allow the system to estimate the position of the ball in three dimensions.


Motion Mechanism:

Motors and Wheels: The motors and a four-wheel axle system will allow the system to move sideways, consequently increasing its catching range.



IR Sensors: IR sensors mounted on the sides of the body will provide distance feedback to the FPGA in order to avoid any collisions with obstructions. (A sketch of this check follows.)
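
A minimal sketch of the obstruction check, assuming two side-mounted sensors reporting distances in centimeters (the 20 cm stop threshold is an assumed value):

    def safe_to_move(ir_left_cm, ir_right_cm, stop_cm=20.0):
        """Allow the base to move only if neither side sensor reports an
        obstruction closer than stop_cm (an assumed threshold)."""
        return ir_left_cm > stop_cm and ir_right_cm > stop_cm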


Vertical Reach Mechanism:

Vertical Rail System: The three-rail mechanism gives the system increased vertical range, including areas otherwise obstructed by the cameras.


Potentiometer: The potentiometer provides feedback to the FPGA, measuring vertical motion as a function of voltage. (A sketch of this mapping follows.)
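
For illustration, the voltage-to-height mapping could be a clamped linear interpolation; the endpoint voltages and arm travel below are assumed calibration values:

    def arm_height(v_adc, v_min=0.5, v_max=4.5, h_max=1.2):
        """Map potentiometer voltage (volts) to arm height (meters).
        v_min, v_max, and h_max are assumed calibration constants."""
        frac = (v_adc - v_min) / (v_max - v_min)
        return max(0.0, min(1.0, frac)) * h_max  # clamp to the rail's travel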


Performance Requirements:


1) Ball-tracking test for each camera: Each camera must be able to accurately track the location of the ball.

2) Calibration in accordance with the information relayed by the FPGA for:

   - IR sensors
   - Motor positions
   - Speed
   - Friction compensation
   - Camera angles

3) Verification that the force exerted by the ball is within tolerance levels: verify that the ball does not exert sufficient force to rotate or topple the system.

4) Accurate ranges for current and voltage (using ammeters and potentiometers): to ensure optimal and safe performance.

5) Shut-offs for specific components: allowing for testing of isolated components, and as an added safety feature.

6) Verification of processing speeds: to ensure adequate response time proportional to the travel time of the ball.

7) Verification of reset behavior.



Verification

Testing Procedures:



Test for Tracking Mechanism: An LCD screen will be employed to relay information
regarding the perceived location of the ball. This information will be tested against
known locations of the ball.




Test for Servo Motors: Once the location of the ball is calculated, the system is programmed to change the camera angles such that the ball is maintained in the centre of the camera frame. This will be tested by positioning the ball off-centre and checking for accurate error correction.




Test for Motion Mechanism: The system must relocate itself in order to position itself under the ball. This movement can be tested by placing the ball at various locations in the catching plane and testing for appropriate sideways movement.




Test for Vertical Reach: The system must extend the catching apparatus to variable heights. This process can be verified by placing the ball at varying heights in the catching plane and analyzing the system response.




Test for Complete System: The system must be able to catch the ball when thrown under specified conditions (such as distance from the system, velocity range of the ball, etc.).


Tolerance Analysis:

Tolerance analysis for the Visual Mechanism: The binocular vision component is integral to the success of our system. In order for the machine to be able to track a ball thrown at 10 m/s (speed determined from stopwatch trials) from 10 meters away, we must first be able to track the ball that rapidly with our vision system. If we use a camera running at 90 fps, then a ball moving at that speed will change position by approximately 11 cm per frame! This is truly a challenge even at 90 fps, so we will have to confirm two basic requirements. First, the servos must be able to rotate the camera fast enough to keep the ball in sight when it is thrown past the machine while the wheel system is deactivated and the base is stationary. The camera should not lose sight of the ball until the eye farthest from the ball can see its companion camera in its field of vision (near 85° of rotation from straight ahead). (A sketch of these numbers follows.)
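
For a rough check of these numbers: the per-frame displacement is v/f, and a ball passing on a straight line whose closest approach to the camera is d requires a peak pan rate of v/d radians per second (when the ball is abeam). The 2 m closest approach below is an assumed demonstration value:

    import math

    v = 10.0  # ball speed, m/s (from our stopwatch trials)
    d = 2.0   # assumed closest approach of the ball to the camera, m

    # Peak pan rate needed to keep the passing ball centered.
    print(f"peak pan rate: {math.degrees(v / d):.0f} deg/s")  # ~286 deg/s

    # Ball travel between consecutive frames at each candidate frame rate.
    for fps in (80, 90):
        print(f"{fps} fps -> {100 * v / fps:.1f} cm per frame")  # 12.5 cm, 11.1 cm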


Additionally, we will temporarily program a counter to tick up every time our vision-processing algorithm finishes gathering data from a single image from a camera. This, along with the system’s time, will be output to a display so that we can verify that we are truly processing at 80 frames per second or faster (80 fps corresponds to 12.5 cm per frame). If vision processing drops below this requirement, we will have to find ways to “cut the fat” from our vision-processing algorithms. (A sketch of the counter check follows.)
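
A minimal sketch of the counter check, with process_frame and get_frame standing in as hypothetical placeholders for our pipeline stages:

    import time

    def measure_fps(process_frame, get_frame, window_s=5.0):
        """Count how many frames the vision pipeline finishes in a fixed
        window; process_frame and get_frame are hypothetical placeholders
        for grabbing and processing a single camera image."""
        count = 0
        start = time.monotonic()
        while time.monotonic() - start < window_s:
            process_frame(get_frame())  # one full pass of the vision pipeline
            count += 1
        fps = count / window_s
        assert fps >= 80, f"vision pipeline too slow: {fps:.1f} fps"
        return fps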

Cost and Schedule

Cost Analysis:

Item                                           | Quantity    | Cost per unit (US$) | Total cost (US$)
-----------------------------------------------|-------------|---------------------|-----------------
NI sbRIO-9642 FPGA                             | 1           | 2,999.00            | 2,999.00
Sony XCD-V60CR FireWire camera                 | 2           | 650.00              | 1,300.00
Body and arm assembly parts                    | 1           | 200.00              | 200.00
Servo motors with pan/tilt assembly            | 2           | 75.00               | 150.00
Two 12 VDC motors with wheels & axle mechanism | 1           | 510.00              | 510.00
Total labor                                    | 3 engineers | 24,000.00           | 72,000.00

Estimated Cost (US$): 77,159.00

Individual labor cost: 20 hrs/week x 12 weeks x $40/hr x 2.5 = $24,000


Completion Schedule:

Week of | James                                            | Preeti                                                          | Rene
--------|--------------------------------------------------|-----------------------------------------------------------------|----------------------------------------------------------------
14-Sep  | Prepare schematics (FPGA component)              | Prepare schematics (visual component)                           | Prepare schematics (motion and vertical reach components)
21-Sep  | Design review; order parts                       | Design review; order parts                                      | Design review; order parts
28-Sep  | Working on code                                  | Working on code                                                 | Build vertical reach mechanism
5-Oct   | Working on code                                  | Working on code                                                 | Build motion mechanism
12-Oct  | Eyes fully tracking object                       | Frame completely built and all on/off power switches installed  | Motors able to reach desired positions when commanded by FPGA
19-Oct  | Integrate system components                      | Integrate system components                                     | Integrate system components
26-Oct  | Test system                                      | Test system                                                     | Test system
2-Nov   | Mock-up demos; prepare for mock-up presentation  | Mock-up demos; prepare for mock-up presentation                 | Mock-up demos; prepare for mock-up presentation
9-Nov   | Mock-up presentation                             | Mock-up presentation                                            | Mock-up presentation
16-Nov  | Test all isolated components                     | Test all isolated components                                    | Test all isolated components
23-Nov  | Test complete system                             | Test complete system                                            | Test complete system
30-Nov  | Demos & final presentation                       | Demos & final presentation                                      | Demos & final presentation
7-Dec   | Final papers & check-out                         | Final papers & check-out                                        | Final papers & check-out