An autonomous mobile robot is expected to move in its environment without colliding
with any obstacle or other robots. Safe navigation and high-quality task performance require
accurate path planning with complete knowledge of the work environment. The quality and
accuracy of the path and trajectory planning of a mobile robot depend greatly on the quality and
accuracy of the map available to the robot.
In robotics, localization is the process of updating the pose of a robot in an environment
based on sensor readings, whereas mapping addresses the problem of acquiring spatial
models of physical environments through a mobile robot's sensors. In Simultaneous Localization
and Mapping (SLAM), the developed map must be free of occlusion and falsified occupancy
to support obstacle avoidance and high-quality path planning. It should be free of errors and clearly
provide the position of the robot as well as the objects in the workspace. Maps
developed by the robot itself, using its on-board vision and sensory system, inherently suffer from
deficiencies due to the robot's line-of-sight view constraints. The presence of multiple
dynamic objects, which not only change their own positions but also modify the environment,
makes accurate mapping a great challenge.
During occlusion, all or part of an object is invisible or appears at very low resolution.
Occlusion may be caused by static objects such as walls or by dynamic objects such as robots in the
scene. Resolving the occlusion problem is a challenging task in tracking and surveillance
systems: tracked objects are frequently occluded by other objects, and the tracking system may fail.
Occlusion is one of the most dynamic and difficult problems in multi-robot localization and safe
navigation, and it severely degrades map quality in the presence of multiple robots.
Falsified occupancy (FO) erroneously reduces the floor plane available for navigation
tasks: due to the viewing-angle effect, an object in an image appears to cover more of the ground
plane than its actual 2-D base. Falsified occupancy, a special type of occlusion (the occlusion
of the ground plane), becomes very severe in clutter and grows rapidly with
increasing object height. It may reach the point where most of the free 2-D floor
plane gives the illusion of being almost fully occupied. In such situations, path planning and collision
avoidance for a mobile robot become very difficult.
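The viewing-angle effect can be illustrated with simple pinhole-camera geometry. The sketch below is only an illustration (the camera height, object height, and distances are assumed values, not the experimental setup of this thesis): it computes the strip of floor hidden behind an upright object, which grows quickly as the object's height approaches the camera height.

```python
# Illustrative sketch of falsified occupancy: the extra ground plane an
# object appears to occupy due to the viewing-angle effect.
# Assumptions: pinhole camera at height cam_h above a flat floor; an
# upright object of height obj_h stands at horizontal distance d.

def falsified_footprint(cam_h, obj_h, d):
    """Length of floor hidden behind the object, beyond its true base."""
    if obj_h >= cam_h:
        raise ValueError("object at least camera height: hidden strip is unbounded")
    # The ray through the object's top meets the floor at d * cam_h / (cam_h - obj_h)
    far_edge = d * cam_h / (cam_h - obj_h)
    return far_edge - d

# The hidden strip grows quickly as obj_h approaches the camera height:
for h in (0.5, 1.0, 1.5):
    print(falsified_footprint(cam_h=2.0, obj_h=h, d=3.0))  # → 1.0, 3.0, 9.0
```

Even a modest object (half the camera height) already hides a strip longer than a typical robot footprint, which is why FO becomes severe in clutter.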
Occlusion and falsified occupancy are major issues in simultaneous localization and
mapping techniques, and occlusion-free mapping is a difficult problem to solve. These are
key issues in cluttered-environment mapping, robot visual navigation and surveillance
systems. To minimize ambiguities due to occlusion, better techniques need to be developed to
cope with the uncertainty in the maps developed for mobile robots.
In the last decade we have seen a great expansion in the area of autonomous robot
applications, such as tele-robotics and industrial and workshop indoor environments. Mobile
robot localization is the problem of determining the pose of a robot relative to a given map of the
environment; it can be viewed as a problem of coordinate transformation. Localization is
needed for performing real-world tasks such as navigation toward a target position, material
transportation, environment monitoring and exploration. To perform complex navigation tasks
in an indoor environment efficiently, an autonomous mobile robot must have the capability
to acquire and update a map of the environment. Accurate localization is a prerequisite for
building a good map, and having an accurate map is essential for good localization. In
many applications, the mobile robot is provided with an a priori map. However, maps that remain
valid for long periods do not always exist, and even when available, their reliability and accuracy
decrease over time. Such maps become useless when dynamic objects move and manipulate
the environment.
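Viewing localization as a coordinate transformation can be made concrete with a minimal 2-D sketch; the pose and landmark values below are illustrative assumptions, not taken from the thesis.

```python
# Localization as a coordinate transformation: given the robot pose
# (x, y, theta) in the map frame, a point expressed in the robot's own
# frame is transformed into map coordinates.
import math

def robot_to_map(pose, point):
    x, y, th = pose
    px, py = point
    # 2-D rigid-body transform: rotate by theta, then translate by (x, y).
    mx = x + px * math.cos(th) - py * math.sin(th)
    my = y + px * math.sin(th) + py * math.cos(th)
    return (mx, my)

# A landmark seen 1 m straight ahead of a robot at (2, 3) facing +y:
print(robot_to_map((2.0, 3.0, math.pi / 2), (1.0, 0.0)))
```

Estimating the pose (x, y, theta) that makes such transformed sensor readings consistent with the map is exactly the localization problem.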
In most previous work, mobile robots are equipped with wheel odometry,
ultrasonic sensors, millimeter-wave radar and/or laser ranging. Dead reckoning, an incremental
technique based on wheel odometry or encoder counts, has been used for decades. These
methods estimate the position by calculating the traversed distance, the direction of motion, and
the amount of time that a mobile robot has traveled. Dead reckoning alone is not sufficient to locate the
robot: even small errors due to wheel slippage or mechanical tolerances accumulate over
time and cause the localization process to fail. Sonar systems are low-cost and small; however,
they are prone to errors due to environmental noise and multiple echoes from objects. They can
fail if other mobile agents equipped with ultrasonic ranging systems are present. For indoor
applications, sonar sensors usually have a wide beam output, which makes identifying the
origin of an echo difficult. Multiple echoes and sonic waves transmitted by other robots make the
source of an incoming genuine echo uncertain.
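The accumulation of dead-reckoning error can be demonstrated with a small simulation. This is a sketch under assumed values: the 1% angular-rate bias stands in for wheel slippage and is hypothetical, not measured data.

```python
# Wheel-odometry dead reckoning, and why small per-step errors
# accumulate over time without external correction.
import math

def dead_reckon(pose, v, w, dt):
    """Integrate linear velocity v and angular velocity w over one step dt."""
    x, y, th = pose
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    th += w * dt
    return (x, y, th)

true_pose = est_pose = (0.0, 0.0, 0.0)
for _ in range(1000):
    true_pose = dead_reckon(true_pose, v=0.1, w=0.05, dt=0.1)
    # Hypothetical 1% bias in the measured turn rate (e.g. wheel slippage):
    est_pose = dead_reckon(est_pose, v=0.1, w=0.05 * 1.01, dt=0.1)

drift = math.hypot(est_pose[0] - true_pose[0], est_pose[1] - true_pose[1])
print(drift)  # the position estimate diverges from ground truth
```

Because the heading error compounds into the position integral, the drift grows without bound, which is why dead reckoning must be fused with other sensing.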
Millimeter-wave radar and lasers are not considered human-friendly
equipment and are relatively costly. Laser sensors may not be
applicable in indoor environments due to human safety issues. GPS is usually applicable only
in outdoor applications and may not be useful in indoor environments. Monocular vision
does not provide range information, so such systems strongly rely on landmarks and
additional ranging sensors. In some applications, images captured from different locations are
used to identify locations.
A robot performing a navigation task must have the ability to move itself from an initial position
to a desired one. Indoor navigation approaches based on landmarks located in the environment
are sometimes difficult to implement due to possible obstruction by other objects. Therefore,
a navigation system based on multiple wall-mounted cameras can be considered as an alternative.
In this thesis, an independent vision system comprising a camera network connected to a
centralized computing facility is used for map building. Three fixed wall-mounted cameras
providing overlapping fields of view of the work environment capture images
synchronously. The following contributions were made.
• Two- and three-camera setups are discussed in detail.
• The captured images are processed by inverse projective transformation.
• Dynamic motion segmentation, adjustment of minor illumination variations and image
registration are performed before image fusion.
• A novel binarized image fusion technique is developed for fusing data in the aligned
images. The fused output image represents the top view of the environment.
• Static and dynamic objects are detected, identified and localized using the fused image.
• A parametric vector containing the object coordinates, the heading angle of moving objects and
a mobility factor is generated for each object.
• A grid-based map with dynamic occupancy values is developed.
• A navigation algorithm is developed using this map.
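The inverse projective transformation step can be sketched as a planar homography mapping floor-plane pixels back to top-view coordinates. The matrix below is a hypothetical example for illustration only; the real homographies would come from camera calibration, which this sketch does not cover.

```python
# Inverse projective transformation of floor-plane pixels to a top view.
import numpy as np

def to_top_view(H_inv, pixels):
    """Map image pixels (N x 2) to ground-plane coordinates via H^-1."""
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous coords
    world = pts @ H_inv.T                                  # apply inverse homography
    return world[:, :2] / world[:, 2:3]                    # dehomogenize

# Hypothetical homography (a pure scale + translation, for illustration):
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 5.0],
              [0.0, 0.0, 1.0]])
H_inv = np.linalg.inv(H)
print(to_top_view(H_inv, np.array([[10.0, 5.0]])))  # maps back to the ground-plane origin
```

Applying such a transform to each camera's image, then registering and fusing the results, yields the common top view used for mapping.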
Using these maps, robots in the environment can localize themselves as well as the other
robots and objects. The mobility factor and dynamic occupancy values may be used by robots in their
global path planning and navigation algorithms.
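A minimal sketch of such a grid map is given below; the cell size, the parametric-vector layout (x, y, heading, mobility factor) and the rule mapping mobility to an occupancy value are illustrative assumptions, not the thesis' exact representation.

```python
# Grid map whose cells carry a dynamic occupancy value, fed from
# per-object parametric vectors (coordinates, heading, mobility factor).

GRID_W, GRID_H, CELL = 10, 10, 0.5   # 5 m x 5 m workspace, 0.5 m cells
grid = [[0.0] * GRID_W for _ in range(GRID_H)]  # 0.0 = free, 1.0 = occupied

def mark_object(grid, x, y, occupancy):
    """Write a dynamic occupancy value into the cell containing (x, y)."""
    col, row = int(x / CELL), int(y / CELL)
    grid[row][col] = occupancy

# Hypothetical parametric vectors: (x, y, heading_rad, mobility_factor);
# mobility 0 marks a static object, higher values mark faster ones.
objects = [(1.0, 1.0, 0.0, 0.0),    # static box
           (3.2, 2.1, 1.57, 0.8)]   # moving robot
for x, y, heading, mobility in objects:
    # Assumed rule: more mobile objects get a lower (less certain) occupancy.
    mark_object(grid, x, y, 1.0 - 0.5 * mobility)

print(grid[2][2], grid[4][6])
```

A path planner can then treat high-occupancy cells as hard obstacles and discount low-occupancy cells of mobile objects, which are likely to vacate.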
1.2 Thesis Organization
The thesis is organized as follows. Chapter 2 is dedicated to a survey of vision system
sensors and configurations, including ultrasonic, infrared and inertial sensors; various camera
system configurations are discussed in detail with respect to their applications in robotics.
Chapter 3 presents a detailed literature review covering localization, occlusion and falsified occupancy,
image fusion, mapping and navigation. In Chapter 4 the proposed image fusion technique for
resolving occlusion and FO, together with object detection, identification and other related issues,
is discussed in detail. Chapter 5 is dedicated to the experimental validation and results of the proposed
algorithm. Chapter 6 presents the discussion and future recommendations.