Embedded Vision: Creating a Next Generation of Machines that “See”
Most of us have heard about the newest craze called Google Glass. Many of us have
seen the Microsoft Kinect for the Xbox 360 video game console. Some of us may
even have a car with a rear view camera, pedestrian detection or a lane departure
warning system. What you may not realize is that all of these devices have something in
common – embedded vision. What exactly is embedded vision, why will it revolutionize
many of the products and systems that exist today and why should you care?
By Jim Beneke, Vice President, Global Technical Marketing,
Avnet Electronics Marketing
In basic terms, embedded vision is the combination of an image sensor or camera with some
sort of embedded processing system. Both of these elements have existed for more than
40 years in some form or fashion, yet only recently have they come together in such a way as
to enable an entire new paradigm of machines that see. What we are observing today is the
result of a convergence in lower power, higher performance, smaller size and lower cost in
the key elements that make up vision-enabled systems. Combine this with improvements and
breakthroughs in software algorithms and data manipulation techniques, and the result is the
dramatic acceleration and adoption of embedded vision.
Elements of an Embedded Vision System
“Embedded vision system” is a broad description encompassing many different features and capabilities. However, most vision-based systems tend to include some variation of the following functions:
IMAGE SENSOR/CAMERA Today many embedded vision systems make use of CMOS
image sensors. Dramatic improvements in these sensors have taken place since they were
first invented at NASA’s Jet Propulsion Laboratory in 1995. Resolution in terms of pixels and
speed in terms of frames per second are two areas where CMOS sensors have been able
to leverage Moore’s Law to their advantage. Power and cost have been decreasing over the
years as well, which is a primary driver in the expansion of embedded vision applications.
ON Semiconductor is one Avnet Electronics Marketing supplier that offers a family of image
sensors targeted at industrial applications. Specialized cameras that include image sensors
are also being developed, providing unique capabilities beyond standard image or video
capture. Time-of-Flight (ToF) sensors are one example since they use the speed of light to
determine distances of various points in an image relative to the sensor. These sensors are
finding uses in gesture recognition, object classification and automotive safety.
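The principle behind a ToF measurement is simple: the sensor times how long emitted light takes to travel to a point in the scene and back, and the distance is half that round trip multiplied by the speed of light. A minimal sketch of the arithmetic, using an illustrative round-trip time:

```cpp
#include <cstdio>

// Illustrative only: convert a measured round-trip time into a distance,
// as a ToF pixel would. The round-trip time below is a made-up example.
int main() {
    const double c = 299792458.0;           // speed of light, m/s
    double round_trip_ns = 13.3;            // example round-trip time, nanoseconds
    double distance_m = c * (round_trip_ns * 1e-9) / 2.0;  // halve it: out and back
    std::printf("Round trip of %.1f ns -> distance %.2f m\n", round_trip_ns, distance_m);
    return 0;
}
```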
PROCESSOR Since we are talking about embedded vision, we are implying the use of
embedded processors versus PCs or workstations. This is an important distinction because only
in the last 15 years has the performance of these processors reached a level where they could
adequately handle real-time video. What has emerged over the last 10 years is a group of
specialized processors that implement unique architectures or dedicated accelerators specific
to image and video processing. General-purpose digital signal processors (DSPs) are giving
way to highly optimized video processors capable of performing very efficient pixel-based
processing and frame-based processing. These are often combined with ARM® cores, which
provide the higher-level processing or intelligence in the system as well as system management
and connectivity functions. Just like the CMOS image sensor, the embedded vision processor is
leveraging Moore’s Law and making significant improvements in processing capability, reduced
power, higher integration and reduced costs. Companies like Analog Devices, Freescale, Texas
Instruments and Xilinx all offer processing solutions tailored for embedded video applications.
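To give a sense of what pixel-based processing means in practice, here is a minimal sketch of the kind of per-pixel operation such video processors are built to accelerate: a binary threshold over an 8-bit grayscale frame. The function name and parameters are illustrative, not any vendor's API.

```cpp
#include <cstdint>
#include <cstddef>

// A simple pixel-based operation: binary-threshold an 8-bit grayscale
// frame in place. Every pixel is touched once, which is exactly the kind
// of regular, data-parallel work that vision processors accelerate.
void threshold_frame(std::uint8_t* frame, std::size_t width, std::size_t height,
                     std::uint8_t thresh) {
    for (std::size_t i = 0; i < width * height; ++i) {
        frame[i] = (frame[i] >= thresh) ? 255 : 0;
    }
}
```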
MEMORY Processing data from the image sensor or camera often requires the storage of
either all or some parts of the video data as it streams through the system. The density of the
memory is less of a driver than the I/O data bandwidth between the processor and the memory.
In the past, specialized memory such as video RAM was required to maintain the performance
needs of the processing system. Although effective, these memories included a cost premium.
With speed and density advances in DDR memory driven by the PC industry, we can now use
standard DDR2/DDR3 devices, yielding significant cost savings in the overall system budget.
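A rough calculation illustrates why bandwidth, rather than density, dominates. Assuming, for illustration, a 1920x1080 stream at 30 frames per second with 2 bytes per pixel, written to memory once and read back once per frame:

```cpp
#include <cstdio>

// Back-of-the-envelope memory bandwidth for a video stream, assuming each
// frame is stored once and fetched once. All numbers are illustrative.
int main() {
    const double width = 1920, height = 1080, fps = 30;
    const double bytes_per_pixel = 2.0;     // e.g. a YUV 4:2:2 format
    double one_pass = width * height * fps * bytes_per_pixel;  // bytes/s, single pass
    double write_plus_read = 2.0 * one_pass;                   // store, then fetch
    std::printf("One pass:        %.0f MB/s\n", one_pass / 1e6);
    std::printf("Write plus read: %.0f MB/s\n", write_plus_read / 1e6);
    return 0;
}
```

Even this modest stream demands roughly a quarter of a gigabyte per second once the processor both writes and reads each frame, which is why commodity DDR bandwidth matters so much to the system budget.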
SOFTWARE/ALGORITHMS Having the required hardware pieces is only half the battle.
Software, and more specifically specialized vision algorithms, are required to manipulate and
analyze the flood of incoming video data. In 2000, the process for developing and implementing
these algorithms changed. Spun from an Intel Research initiative started in 1999, Open Source
Computer Vision Library or OpenCV (www.opencv.org) was released to the public in 2000 as
an optimized, open source library of C/C++ functions centered on vision-based applications.
Since then, periodic releases of the OpenCV library have resulted in additional functionality and
further optimizations to the various vision algorithms, making them easier to port and run on
embedded processors. In addition to the free, open source OpenCV functions, commercially
developed vision libraries offer an alternative option. Many third parties offer specialized vision
and video processing solutions for various applications. A model-based design methodology
from MathWorks provides another option with a complete system-level approach to designing
embedded vision systems. The MathWorks tools support everything from system modeling and
simulation to automatic code generation and hardware validation.
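As a minimal sketch of what working with the OpenCV C++ API looks like, the following opens a camera, converts each frame to grayscale and runs Canny edge detection; the camera index and the edge-detection thresholds are arbitrary illustrative values:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                 // open the default camera
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, edges;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);   // color to grayscale
        cv::Canny(gray, edges, 50, 150);                 // edge map
        cv::imshow("edges", edges);
        if (cv::waitKey(30) >= 0) break;                 // any key exits
    }
    return 0;
}
```

Build against an installed copy of OpenCV and link the appropriate modules for the version in use.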
Previously printed in Avnet Electronics Marketing’s AXIOM publication, June 2013. All rights reserved. www.em.avnet.com/axiom
Embedded Vision Applications
Vision-enabled and embedded vision systems have been around for a
number of years. Anything with a camera could be classified as vision-
enabled. However, over the next three to five years, it is very likely
you will begin to see an explosion of embedded vision applications.
Driven by the advances in sensors, processors and software, these
systems will leverage the price, performance and power advantages
in an exponential fashion. Nearly every market will be impacted by
the technology, from industrial, medical and automotive to consumer,
aerospace/defense and security. More importantly, there will be
a range of embedded vision systems offered, depending on the
processing performance needs and supportable product cost. Figure 1
shows how vision systems can be viewed relative to their image sensor
needs and processing performance requirements.
The sensor outputs images at some pixel resolution and at some
interval (frames per second). Together, these determine how much
processing throughput is required in terms of megapixels per second.
Not all applications require the biggest and fastest devices to solve a
problem. If you want to track the movement of a person in a room,
you can likely achieve this with a low-resolution (VGA) image
sensor and a processing system analyzing movement at a couple
of frames per second. Likewise, on the high end, the introduction of
image sensors that can output hundreds of megapixels of resolution or
thousands of frames per second is enabling high-precision machine
control and inspection equipment that was unheard of just five years ago.
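That throughput figure is simply the pixel count per frame multiplied by the frame rate. A short sketch computing the megapixels-per-second values for the operating points shown in Figure 1:

```cpp
#include <cstdio>

// Megapixels/second = pixels per frame * frames per second.
// The operating points below match those plotted in Figure 1.
int main() {
    struct Point { const char* name; int w, h, fps; };
    const Point pts[] = {
        {"QVGA @ 5 fps",     320,  240,   5},
        {"VGA @ 15 fps",     640,  480,  15},
        {"VGA @ 30 fps",     640,  480,  30},
        {"HD720 @ 30 fps",  1280,  720,  30},
        {"HD1080 @ 30 fps", 1920, 1080,  30},
        {"VGA @ 240 fps",    640,  480, 240},
    };
    for (const Point& p : pts) {
        double mps = double(p.w) * p.h * p.fps / 1e6;
        std::printf("%-16s %6.1f MP/s\n", p.name, mps);
    }
    return 0;
}
```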
Conclusion
Embedded vision is growing. Many next-generation products will
include some sort of vision capability to detect, recognize, analyze,
categorize, or track objects or people. At Avnet, we offer all the pieces
to help you get started. From the image sensor or camera to the
embedded processor and memory sub-system, we sell and support the
leaders in this evolving technology. We are one of the founding members
of the Embedded Vision Alliance (www.embeddedvisionalliance.com)
where you can obtain a wealth of information, tutorials and videos
related to embedded vision. We also offer Avnet created development
kits that support embedded vision design and prototyping.
The new FinBoard Embedded Vision Development Kit (www.finboard.org) provides a low-cost,
vision-optimized Analog Devices Blackfin® BF609 processor in a complete kit with software
development tools, debugger and numerous reference designs. FinBoard is ideal for creating
low-to-mid range machine vision, security and video analytics solutions.
Another option is Avnet's ZedBoard (www.zedboard.org), which is targeted at higher-performance
embedded vision applications with its Xilinx Zynq®-7000 All Programmable SoC.
Be part of this next revolution and start exploring the endless possibilities that can enhance and differentiate
your future products. The future of machines that “see” is happening now.
FinBoard Kit
Figure 1: Image sensor output and processing throughput requirements for typical embedded vision applications

Application                         Resolution   Frames/Sec   Megapixels/Second
Barcode scanner                     QVGA         5            0.4
Gesture detect                      VGA          15           5
Sleepy eye, license plate detect    VGA          30           9
Machine vision, facial recognition  HD720        30           28
Security/surveillance               HD1080       30           62
High-speed machine vision           VGA          240          73
Bar code scanning will soon employ embedded vision to better find
and read bar codes. Other systems will identify items that have no bar
codes on them and determine what they are through the use of vision
analytics. Security systems will no longer use keys and cards to identify
users, but will instead recognize users and owners by capturing an
image of their face. Safety systems will be more accurate and faster.
Appliances and instruments will no longer require you to press buttons
or touch them, but instead will see you and respond to your hand
movements or gestures. Robots will become smarter and more
autonomous as they become better at identifying objects and their surroundings.
Cars will continue to become safer and easier to drive through the use
of specialized embedded vision systems both in-cabin and external.