
Engineers’ Guide
to Embedded Linux
& Android
Annual Industry Guide

Solutions for engineers and embedded developers using
Embedded Linux and Android
Android is Definitely
Coming to Your Next
Embedded Design
Improve Linux Real-time Performance in
Multicore Devices with Light-Weight Threading
The Universal NDK
Just as penguins are superbly adapted to aquatic life, the Yocto based
Enea® Linux distribution is superbly adapted to Next Generation
Networking Infrastructure.
2 Engineers’ Guide to Embedded Linux and Android 2013
Welcome to the Engineers’ Guide to
Embedded Linux and Android 2013
Linux and Android, Like Peanut Butter and Jelly
For a while, I didn’t understand how Linux fit with Android, Java, or
Chrome (the OS, not the browser). Still don’t know what Chrome’s all
about, but it’s now crystal clear to me how Android, Linux and Java play
together. In fact, they play really nicely together. When trying to decide
which operating system is best for your next embedded design - Linux
or Android - it all comes down to the display. If your system is “headed”
and has a display, especially one with capacitive touch interface, then
Android is an excellent choice. That’s what it was designed to do.
In my interview with industry expert and educator Bill Gatliff “Android
is Definitely Coming to Your Next Embedded Design”, Bill points out
that Android is an ideal choice for “headed” systems, especially if they
roughly mimic the elements of a smartphone or tablet computer. When
run on top of Linux and sporting Java applications, Android offers a
world of possibilities. But for headless systems? He doesn’t recommend
it; instead, stick with Linux alone. Bill also points out that Linux can turn
in decent real time response if the schedulers are coded properly.
Author Christofferson from Enea agrees, and goes several steps further
in his article “Improve Linux Real-time Performance in Multicore Devices
with Light-Weight Threading”. He makes a case for PREEMPT_RT, a
package of real time extensions for Linux that takes big steps towards
fixing the interrupt latency with multicore devices. And for devs who
want even less latency than what Linux offers natively, LynuxWorks
unveils their Type Zero Hypervisor in “Type Zero Hypervisor - the
New Frontier in Embedded Virtualization”. One then runs Linux and/or
Android in a secure virtual environment.
Getting back to Android, Viosoft points out that Android is growing
faster than any other smartphone OS, with 100,000 apps added to the
Google Play app store in the four months ending June 2012. But all apps
aren’t created equal, since many start out being written for iOS then
ported to Android (iOS had “only” 63,000 apps in that same period, by
the way). But which CPU is the app to run on? ARM, AMD, MIPS? Author
Tran argues in favor of a uNDK - a universal native development kit that
abstracts the hardware by including native code for all popular CPUs.
This might be a great idea, as Android finds its way into automotive in-
vehicle infotainment (IVI) systems which use even more variety in CPUs
such as Qualcomm Snapdragon and nVidia Tegra.
Andrew Patterson of Mentor Graphics discusses the tradeoffs between the OS choices in "Life in the Fast Lane: Linux or Android for Automotive Infotainment Systems?" He introduces GENIVI, an open interface consortium developing Linux-based IVI standards and software interfaces. But
he also points out that Android now accounts for greater than 50 percent
of the worldwide smartphone sales (compared to Apple’s 25 percent),
and since IVI systems are smartphones on steroids - Android is a pretty
good choice in some instances.
Finally, other stories in this issue include: “Self-Protecting, Security-
Aware Mobile Applications” by Metaforic; an exclusive article from
ADLINK showing the amazing performance of Intel’s brand new data-
plane development kit (DPDK) and Crystal Forest network processor
solution (running Linux benchmarks, of course); and colleague and editor
John Blyler wraps up musing on Intel, smartphones, and BIOS in “Intel
Expands Semiconductor IP in Handset Bid”.
Obviously, there’s lots happening in the market for Android and Linux.
We’re thinking of expanding this catalog, so check us out on the web at
www.eecatalog.com/embeddedlinux for even more related content.
Senior Editor,
Engineers’ Guide to
Embedded Linux and Android
VP/Associate Publisher
Clair Bright
(415) 255-0390 ext. 15
Editorial Director
John Blyler
(503) 614-1082
Senior Editor
Chris A. Ciufo
Cheryl Coupé
Production Manager
Spryte Heithecker
Graphic Designers
Keith Kelly - Senior
Nicky Jacobson
Media Coordinator
Jenn Burkhardt
Senior Web Developer
Mariam Moattari
Advertising/Reprint Sales
VP/Associate Publisher
Embedded Electronics Media Group
Clair Bright
(415) 255-0390 ext. 15
Sales Manager
Michael Cloward
(415) 255-0390 ext. 17
Jenna Johnson
To Subscribe
Extension Media, LLC
Corporate Office
President and Publisher
Vince Ridley
Vice President, Sales
Embedded Electronics Media Group
Clair Bright
Vice President,
Marketing and Product Development
Karen Murray
Vice President, Business Development
Melissa Sterling
Special Thanks to Our Sponsors
The Engineers’ Guide to Embedded Linux and Android is published by Extension Media LLC. Extension
Media makes no warranty for the use of its products and assumes no responsibility for any errors
which may appear in this Catalog nor does it make a commitment to update the information contained
herein. Engineers’ Guide to Embedded Linux and Android is Copyright
2013 Extension Media LLC. No
information in this Catalog may be reproduced without expressed written permission from Extension
Media @ 1786 18th Street, San Francisco, CA 94107-2343.
All registered trademarks and trademarks included in this Catalog are held by their respective
companies. Every attempt was made to include all trademarks and registered trademarks where
indicated by their companies.
Android is Definitely Coming to Your Next Embedded Design
By Chris A. Ciufo, Editor
Improve Linux Real-time Performance in Multicore Devices with Light-Weight Threading
By Michael Christofferson, Enea
The Universal NDK
By Hieu T. Tran, Viosoft Corporation
Life in the Fast Lane: Linux or Android for Automotive Infotainment Systems?
By Andrew Patterson, Mentor Graphics
Type Zero Hypervisor – the New Frontier in Embedded Virtualization
By Will Keegan and Arun Subbarao, LynuxWorks, Inc.
Needed: Self-Protecting, Security-Aware Mobile Applications with Anti-Tamper
By Andrew McLennan, Metaforic
Consolidating Packet Forwarding Services with Data-Plane Development Software
By Jack Lin, Yunxia Guo, and Xiang Li, ADLINK
Intel Expands Semiconductor IP in Handset Bid
By John Blyler
Products and Services
Hardware / Hardware Tools
EMAC, Inc.
PPC E4+ Compact Panel PC
Software Products
Networking / Communication Packages
TeamF1, Inc.
SecureF1rst CPE Gateway Solution
SecureF1rst Network Attached Storage
SecureF1rst Security Gateway Solution
MontaVista has over 50 million devices in the marketplace, ranging from cell phones to automobiles to medical devices. Our domain expertise spans high-change, fast-development, short-life devices in the consumer market; large projects involving hundreds of people over many years in the automotive industry; and the high-performance, high-availability demands of the long-life, long-term support requirements of the network infrastructure market.
MontaVista has been the innovator in the Embedded Linux market and now offers that knowledge and experi-
ence via our Professional Services organization to the market: kernel and design engineering, developers of
high availability systems, porting and migration services, creating build systems that integrate standardized
packages, development of unique test & validation services for your project and a host of customized ser-
vices and requirements. MontaVista understands the challenges you are facing and the pressures to deliver
on-time, on-budget and with the highest level of reliability and quality. MontaVista can assist at every stage of
development, deployment and on-going long
term maintenance and support of your final
product. We can work with your team as an
integrated group member, project manage the
task or take responsibility for the delivery of a
finished component or total solution. We make
sure you succeed!
MontaVista has multiple standardized offerings
around Architecture and Design Services,
Developing Customer Test & Validation Services,
System Performance and Optimization,
Virtualization Design and Optimization Strategies.
Within each offering we conduct an evaluation
and design review, recommend a Proof of
Concept, then create SOW using our standard
practices within these areas.
Editor’s note: I first met Bill Gatliff at ESC Boston 2011 where
I was responsible for content for UBM Electronics’ embedded
franchise and events. Bill and a colleague Karim Yaghmour
taught the Android Certification Program curriculum because
they’re simply the best in the business. Bill is an experienced
and outspoken expert on Linux and all things embedded,
including RTOSs and applying Android to myriad embedded
systems besides handsets. Bill's comments that follow are
insightful and frequently a bit, shall we say, “opinionated”.
But that doesn’t mean he’s wrong. Read on and see if you agree.
Edited excerpts follow.
EECatalog: Given the choice for
a designer’s future embedded
design, what will be the next
OS: a Windows variant, an
RTOS, Linux, Android or some-
thing else?
Bill Gatliff: I think Win-
dows would be a distant third
(haven’t they always been?), but you’d really need to know
the application before deciding among the other choices.
An RTOS still makes sense for certain space-constrained
and/or real time situations, but Linux is increasingly an
option here. If the system has a UI, the clear winner is
Android by far – but Android doesn’t bring much to head-
less systems.
If the system calls for “real time”, it’s important to define
what that means. A fairly ordinary Linux or Android
distribution can support sub-millisecond scheduling pre-
cision if the developer knows how to code for it with good
device drivers and the two POSIX.1b schedulers that every
modern Linux kernel provides. Simple function calls evoke
them, and you can get performance that’s fairly similar to
an RTOS. For instance, some years ago I delivered an Intel
PXA270-based RISC system clocked at 330 MHz which
demonstrated a scheduling precision of 120 to 150 microseconds. It was a modified embedded Debian Linux
distro, I think it was the 2.6.3x kernel, and we were all very
impressed and pleased with this real-time performance. Of
course, an RTOS can often provide real time performance
that’s an order of magnitude better than Linux for situa-
tions when you truly need that.
I want to point something out about Android, though: it has no
provisions for prioritized scheduling whatsoever. It was never
intended for real time work, and if someone is demanding that
from Android by doing, say, [video] rendering in software,
they’re expecting too much. The right way to do it is with hard-
ware acceleration, just like what’s done in smartphones where
Android only draws a blank box on the screen and then just
babysits some dedicated video hardware.
In general, if there’s a real time requirement in your system
and you feel you have to
implement it in Java up inside
of Android, then you’re prob-
ably using the wrong platform
– or more likely, you’ve
divided up the solution in
the wrong way. Take a closer
look at your hardware, and at
sending more work over to a
Linux process where you have
richer threading, prioritiza-
tion, and scheduling options available. I have surprised
several clients lately with the positive results I can achieve
by confining Android to what it does best, and leaving the
rest to Linux and the platform hardware underneath.
EECatalog: Contrast the security differences between
Linux and Android as far as data in flight, data at rest, or
the possibility of an application going rogue.
Bill Gatliff: Linux implements protected memory and the
well-known “user, group, other” file system access restric-
tions, both of which Android builds upon. Android adds to
that a specialized memory-sharing API, and cryptographic
signature checking so you can authenticate that an application came from where it claims to come from. But Android
still uses the standard Linux process model to implement
the bulk of its low-level security, because Android has no
choice: to Linux, Android – and any activity being super-
vised by Android – is just another user program.
But what is interesting, and you talked about it when you
mentioned Green Hills’ “padded cell” concept [for their RTOS
INTEGRITY], is the rise of virtualization.

Android is Definitely Coming to Your Next Embedded Design
With Android shipping on more phones than ever, it’s becoming a compelling embedded choice. Android and Linux will emerge as a proven, clearly superior combination for many embedded systems.
By Chris A. Ciufo, Editor

“Done well, virtualization creates opportunities that Linux and Android can really take advantage of.”

If the hardware is capable, then Linux or any other operating system can use that to provide different implementations of “security”, some
of which are more useful than others in embedded systems.
If you look at a lot of the really big security breaches that
we hear about in the press, many of them are successful
attacks at the application level: a server gives up secrets
that it shouldn’t, for example. Those servers are probably
coincidentally running on virtual machines, so clearly vir-
tualization isn’t a direct, automatic way to protect yourself
from deploying an application with a fundamentally weak
security model or flawed implementation. Android doesn’t
change anything here.
But virtualization can be very helpful for simplifying and
hardening the implementation of an embedded system
that already has a strong security model. You could hide
the really sensitive data within a virtual machine that
you cannot penetrate even with a rogue program – the
hardware simply will not allow it. That sensitive data could be payment or identity information, for example, or
it could be firmware for an FCC-approved radio that can’t
be modified without recertification.
Done well, this use of virtual-
ization creates opportunities
that Linux and Android can
really take advantage of: highly-
differentiated devices, with
the simultaneous assurance
that the scary bits underneath
are still as they were when the
ODM received the hardware.
You don’t just tolerate the
hacking community, you embrace them.
EECatalog: Talk about the relevance of the various “fla-
vors” of Android and if one is a clear winner for embedded
applications that are neither a smartphone nor a tablet.
Bill Gatliff: So you’re referring to the different API
releases of Android, such as from Gingerbread 2.3 to
Ice Cream Sandwich [4.0.1] to the latest Jelly Bean 4.1
just released in July of this year. Android is still under
active development and the latest is generally the best.
I’m finding the performance of Jelly Bean to be much
smoother than Ice Cream Sandwich, in particular. They’re
all getting more complicated in how their APIs deal with
the underlying hardware, especially when video and audio
acceleration takes place. Android is also getting more
sophisticated in how it talks to OpenGL and sensors – this
is good, because the closer Android gets to that kind of
hardware, the more you can “export” these capabilities up
to the application developers.
Although I said the newer releases are the obvious choices,
once a release comes out it always takes everyone a while
to get their head around “how is the hardware abstraction
layer in this release different from the previous release?”
There’s still quite a bit of churn there so it takes time
before someone can bolt it onto an embedded system. My
general advice is to start with what’s current, and then
assess each new release to see if there’s anything there
of value to your situation. And learn to use tools like git
and repo from the beginning, so that when a migration is
called for, your work is much, much easier.
EECatalog: How does Android deal with “headless” systems?
Bill Gatliff: As I said earlier, if a system doesn’t have a UI
(and this might include a touchscreen like in a smartphone
or IVI system), then Android may not make sense. Some
people still seem to think that Android has something to
offer in headless systems, perhaps because its notion of
“packages” also allows you to plug and unplug applications
even when you don’t have a UI. This is a weak justification
for using Android; a lot of the Android framework is related
to visualization and you’d be throwing away huge chunks
[of Android] if the system doesn’t have a touch panel.
Note that Android’s package
system is strictly for man-
aging Android application
packages; Android makes
no attempt to manage the
underlying operating system.
Specifically, Android’s pack-
aging system can’t be used
to update the Linux kernel,
or replace a program to fix a
bug. Android literally has no
story for these parts of an embedded system, and one of
my big projects right now is to address this concern in a
standard, reusable, public way. Hint: run Android on top
of a truly package-managed Linux operating system.
EECatalog: Let’s talk about application code migration,
such as from a system using an RTOS to one that migrates
to Linux. Will we see this trend continuing to Android?
What are the challenges and use cases?
Bill Gatliff: RTOS developers move to Linux for a lot of
reasons, some of which are related to the options available
for user interfaces. If that’s a motivation, then Android
makes sense because it provides a robust, cohesive GUI
implementation but still allows the bulk of the system’s
overall implementation to remain under Linux where it
probably belongs. If you go over to Linux without Android,
however, then you have to cobble together a lot of what
Android already has in place. That may work well for some
developers, but it’s a bewildering array of choices for most.
“Android explicitly omits the real-time performance options you have with Linux or a true RTOS.”

But this is really, really important to understand: to get the best from Android you have to partition your application well. Android explicitly omits the real-time performance options you have with Linux or a true RTOS. This isn’t
usually a problem, since if you are doing, for example, real-
time visualization then you will almost certainly need to
hand the work over to hardware anyway – and Android
already deals with that use case really well. And the softer
real-time constraints can usually be met by the real-time
schedulers that Linux implements today, supervising pro-
cesses running alongside Android.
It’s a common misconception that if an embedded system is
running Android, then everything that runs on the plat-
form has to run in Android and be written in Java. If you
try that, you are setting yourself up for failure by asking
Android to do things it just isn’t designed to do. That was
part of the point Karim and I were trying to make at last
year’s ESC Boston, and I’ve come up with even better ideas
since then that I will be bringing back to Boston (and else-
where) this year. I’m trying to get in front of anyone who
is willing to listen, actually.
Android is giving developers yet another good reason to look at
Linux, and the uptake has been so great that there just hasn’t
been time to create a good set of best-practices for understanding
and utilizing Android for embedded work. I’m hard at work with
my clients, who are eager to see the results as quickly as I can roll
them out. But I wonder how many other projects are struggling
right now. I think that in another year or so, Android and Linux
will emerge as a proven, clearly superior combination for many
embedded systems. I also think that Android in particular will
attract a wave of non-traditional embedded developers seeking
to make new, interesting devices that are as easy to program as
their cell phones. I know I’m not the only one working as hard
as possible to bring this future into reality as quickly as possible.
It’s an exciting time to do what I do, honestly.
Chris A. Ciufo is senior editor for embedded content at
Extension Media, which includes the EECatalog print
and digital publications and website, Embedded In-
tel® Solutions, and other related blogs and embedded
channels. He has 29 years of embedded technology
experience split between the semiconductor industry
(AMD, Sharp Microelectronics) and the defense industry (VISTA
Controls and Dy4 Systems), and in content creation. He co-founded
and ran COTS Journal, created and ran Military Embedded Sys-
tems, and most recently oversaw the Embedded franchise at UBM
Electronics. He’s considered the foremost expert on critically applying
COTS to the military and aerospace industries, and is a sought-after
speaker at tech conferences. He has degrees in electrical engineering,
and in materials science, emphasizing solid state physics. He can be
reached at
Bill Gatliff is a freelance embedded Linux and
Android consultant, developer, and a Dis-
tinguished Faculty member of the Embedded
Systems/DESIGN West and East Conferences. He
can be reached via email to
Power Bits: Last Laptop Standing, Bacteria Power
Road Warrior Tools
By Ed Sperling

Being able to fly cross-country using a laptop all the way without plugging it in is one thing. Being able to fly across the Pacific Ocean is quite another.

The race is on not just to extend battery life, but to extend it while actually doing something useful on all mobile devices, whether that’s a PC or a smart phone. That requires a significant amount of specialization in both the processor and the software.

Lenovo’s announcement this week of its ThinkPad X1 Hybrid is a case in point. The laptop includes something called Instant Media Mode, which it calls a second PC. Based on a dual-core Qualcomm processor running Linux, this chip can be used to watch videos, listen to music and surf the Internet. That still leaves the regular Intel chip to do the bulk of the heavy lifting, but it’s an interesting approach.

Lenovo isn’t the first company to come up with this idea, of course. Dell introduced a similar device back in 2009. The current iteration, called Latitude ON and available in its lineup, uses an ARM Cortex M3 core running in a Broadcom chip to achieve what ARM claims is multi-day battery life.

This also helps explain why the netbook market segment has largely disappeared overnight, wedged out by tablets on one side and long-life laptops on the other. Interestingly, ARM seems to be the common thread in all of these.

Read the full story at:
There has been much focus in the last decade on improving
Linux real-time performance and behavior, most notably
the PREEMPT_RT Linux real-time extensions. And more
recently there has been much work on Linux user-space
solutions for multicore devices that enable direct access
from user space to underlying hardware, thereby avoiding
additional overhead of involving the Linux kernel in user-
space applications. These user-space extensions (and there
are several) have been primarily driven by the telecom/
networking high performance IP packet processing market
for so-called “bare metal”
implementations, wherein a
Linux user-space application
in a multicore device can
mimic the performance of an
“OS-less” solution, namely
a simple run-to-completion,
polling loop on each core for
packet processing. And while
that goal has been essentially
met, the solution is still for a
very special use case.
Are there other use cases that
demand performance improve-
ments not completely addressed
by the above? If so, then what
are these use cases, and are
there further Linux real-time improvements that can be applied?
The answer is “yes,” with Linux user space light-weight threading
(LWT). So let’s examine the issues with respect to real-time
Linux, and how light-weight threading can be a solution for some
applications. The focus here is driven by telecom, networking,
or general communications applications, on which Enea focuses
its technology. But overall, this focus on light-weight threading
could be of benefit to many markets.
Real-time Linux and the Problems It Solves
Over the last 10 years, Linux has made some significant
improvements in real-time performance and behavior that
address a wide range of applications. These are summa-
rized as follows:
Perhaps the most notable achievement of real-time
extensions for Linux, the PREEMPT_RT package solves a
particularly nasty problem in Linux for multicore devices,
namely “interrupt latency.” There is very high overhead in servicing interrupts in the Linux kernel before passing the event/data to the real user-space applications; this overhead tends to delay other interrupts, increasing the overall latency as measured from the time the interrupt occurs to the time the information of the interrupt reaches its receiver for processing. Likewise, there are many
so-called “critical sections”
in the Linux kernel wherein
interrupts are disabled
via spin locks. The overall
interrupt latency from the
standard Linux kernel does
not match the most serious
interrupt latency require-
ments for many real-time
applications, especially
in radio access networks
(mobile) and mobile core
infrastructure that demand
worst-case interrupt latency
in the range of 20-30 micro-
seconds. And this applies to
many other market applica-
tions. In a quick “nutshell ”
PREEMPT_RT solves this problem by:
schedulable threads, so that Linux kernel interrupt-level
processing is minimal and so that new interrupts may be
serviced without waiting for the previous interrupt handling
to be completed. Interrupt handling then becomes priority
driven, with the highest priority ones completed first as per
the user desires.
kernel into mutexes that then allow other kernel threads to
run in lieu of the kernel space spin lock.
Improve Linux Real-time
Performance in Multicore Devices
with Light-Weight Threading
The history of real-time embedded Linux has been to prove that Linux can operate
as well as a traditional RTOS. Linux light-weight threading brings it closer to the
goal for the most serious telecom/networking applications.
By Michael Christofferson, Enea
What is missing from
the real-time Linux
solutions survey is a
serious examination of
the usefulness of multi-
threading in real-time
embedded applications.
Basically, PREEMPT_RT has had real success in reducing overall interrupt latency to the standards of very high-performance real-time systems, and this helps a great many Linux applications. Which ones? Read on.
User-space Linux adaptations
As mentioned above, recently there has been much work on Linux
user-space applications. The idea is to allow user-space applications, where Linux users place all of their effort on their value add, to avoid the overhead of the Linux kernel itself for some specific device/interrupt interactions. Linux has a model that
provides much protection of the user-space application from the
kernel, wherein all user-space operations, including threads,
always map to the Linux kernel for processing its requests for
I/O. This gives Linux its robust behavior and characteristics.
But even with PREEMPT_RT, for very high data-processing per-
formance applications, Linux falls short because a Linux kernel
context switch is always needed for accessing the hardware
directly. User-space Linux implementations give the application
direct access to HW and interrupts without the involvement
of the Linux kernel, with a tremendous gain in performance.
But this performance is only gained in very high I/O-intensive
environments. Most Linux user-space adaptations focus on
single-threaded applications,
like high-performance packet
processing, wherein there is
only one thread under Linux
used to emulate “OS-less” per-
formance in multicore devices.
The Multi-Threading Issue
What is missing from the
real-time Linux solutions
survey is a serious examination of the usefulness of multi-
threading in real-time embedded applications. Long before
Linux came along – in fact, in the early 1980s – there arose
the need for embedded real-time operating systems (RTOS)
designed for low-latency, high-throughput, seriously real-
time applications. The OS landscape has changed but the
requirements have not. These RTOS solutions featured the
kinds of performance, behavior and characteristics that
Linux has been trying to catch up with for the last 10+
years. This is not a pitch for the return of the RTOS, as
good as they were. The overall Linux value in real-time
embedded solutions in terms of portability, vast ecosystem
of applications and device support and general support is
unmatched by any RTOS. There are two real questions: what does RTOS-style multi-threading offer that Linux does not, and can we bring that multi-threading performance, behavior and characteristics to Linux so that we can raise the bar? The key is to understand the Linux multi-threading implementation versus an RTOS, and then see what can be done.
Why is Multi-threading Important?
Multi-threading requirements for real-time systems arose over 30 years ago, as software designers faced complex problems that could not be solved by single-threaded designs: solutions that required a single application to have multiple tasks, perhaps some computational and some I/O-driven, but all closely coupled in terms of the overall execution of the task. But multiple
tasks in a closely coupled environment means that there
should be some sharing of CPU time for overall CPU uti-
lization effectiveness. In many such applications, some
operations had to be blocked, waiting for some I/O event
or other communication from another application. So
simple executives that could handle multiple threads with
thread blocking and with low-latency communications
amongst threads arose.
Not all real-time applications require significant multi-
threading support, and this article does not attempt to
categorize them all. But clearly among the applica-
tions that do require it is any kind of complex protocol
that induces “wait-states” – i.e., a wait for a
response or an event that allows the application to proceed.
While waiting for that response or
event, the application
should cede control of the
CPU to allow other similar
threads to run.
So perhaps the above tutorial
sounds simple to many of you.
The important thing to note
is that many, many providers
of mobile infrastructure and
core network equipment have come to the conclusion that
while Linux is the choice for current or future systems, Linux
as currently constituted does not quite measure up. Why not?
Linux Multi-threading with PTHREADS
Pthreads was created by the IEEE Portable Operating
System Interface (POSIX) initiative to address the high-
performance multi-threading problem in Unix, and was
later adopted by Linux – which was, in its earliest form, a portable
Unix implementation for the enterprise and now for embedded.
The pthreads model was created to address the problem
of the original Unix Fork/Join model for creation of Unix
“child” processes. The Unix process model is very heavy-
weight, as it involves the creation (and potential deletion) of
whole memory-protected environments as well as an
execution mode. A lighter-weight model for multiple threads
under Unix was needed; hence pthreads.
But the Unix (and hence Linux) model was designed for
complete separation of the kernel and the user-space appli-
cations, one of its advantages in protection, security and
reliability over other implementations, including RTOS,
over the last 10 or so years. In essence, this means that
every pthread in Linux user space is mirrored by a Linux
kernel thread for all or most Linux system-call and espe-
cially device-driver access from user space. But user space
is where virtually all embedded Linux real-time applica-
tions reside as OEMs build out their products without GPL
contamination. So in every case, use of pthreads involves
the invocation of the Linux kernel, adding overhead over
what could otherwise be a purely user-space implementation.
But wait a second, you say. What about the Linux real-
time extensions mentioned above? Well, PREEMPT_RT
addresses many issues inside the Linux kernel with
respect to responsiveness, but it doesn’t really address
multi-threading. User-space Linux implementations
address the device driver/interrupt performance issues,
but they don’t really address the multi-threading
issue. Linux real-time containers address some of this, but
real-time containers are simply a user-space Linux virtual-
ization technique above standard Linux that doesn’t really
address the fundamental multi-threading issue.
Light-Weight Threading (LWT) – The Real
Solution for Complex Linux Applications
There are many light-weight threading models that have
been proposed for Linux, but none of them have really
caught on. Why? Because most of these are not very robust.
What is really needed for the next-generation Linux solu-
tions that involve complex multi-threading applications
is a completely new Linux model for user-space Linux
applications. This solution, called Linux light-weight
threading (LWT), is outlined below (Figure 1). Put a high-
performance, low-overhead, multi-threading scheduler in
Linux user space, over a single pthread. Why?
- The LWT threads are invisible to the Linux kernel; the kernel schedules only the
entities that Linux knows – here, a single pthread.
- All LWT scheduling happens in user space, within the CPU time Linux grants
the permanently running pthread. The pthread never
gets suspended as the user-space scheduler maintains
control – except in power-save scenarios, a topic outside this article.
- Context switching between LWT threads can be as fast
as some of the traditional RTOS high-performance, low-
latency implementations, without any involvement of the
Linux kernel.
- Inter-thread communications can likewise stay in user space, much like user-space
Linux implementations for direct hardware access. Again,
no Linux kernel involvement.
An LWT solution as described above will deliver dramatic
performance increases in any Linux real-time application.
Enea has done some prototypes of the LWT described
above that show over 10x the performance compared to
Linux pthreads on scheduler overhead, specifically with
regard to context switching and inter-thread messaging/
communications latency.
But above and beyond scheduling performance and
inter-thread communications, what should a LWT solu-
tion bring? There is more to the LWT concept than just
superiority over Linux pthreads in performance (Figure 2).
What about the concept of robustness of the solution? The
following additional Linux constructs are also needed as
time-honored RTOS real-time solutions:
Figure 1: Architectural view of the Linux light-weight threading model on a multicore device
14 Engineers’ Guide to Embedded Linux and Android 2013
Architectural View of the Linux Light-Weight
Threading Model on a Multicore Device
The architectural view of an LWT implementation is as
follows. A Linux process, with its single shared memory
space, may span many cores of a multicore device. For
maximum efficiency, the LWT model calls for the single
pthread hosting the LWT scheduler to be locked to a core,
but that is not strictly required: if left unlocked, the
pthread can migrate to any core that Linux SMP desires.
Efficient light-weight threading (LWT) is the next Linux
real-time performance and behavior issue. Again, not all
real-time applications need a powerful LWT-like solution.
But some, especially in telecom/networking and especially
those that need some of the complex networking proto-
cols in radio access networks, mobile infrastructure core/
edge, or any other markets that have similar real-
time requirements, could benefit from Linux light-weight
threading – the next-generation Linux real-time exten-
sion. Again, the entire history of real-time embedded
Linux has been to prove that Linux can operate as well
as the traditional RTOS solutions. Linux has made some
strides, but from this author’s perspective, Linux in the
most serious telecom/networking applications is not quite
there yet. But perhaps with Linux light-weight threading
we are getting closer to the goal. In conclusion, one focus
of the Linux real-time embedded industry is for solutions
for the hardest real-time applications. This goal is
depicted in the following graphic:
Mr. Christofferson, Enea director of product
marketing, has over 30 years’ experience in
software development for deeply embedded tele-
com or networking systems. He spent the first
8 years of his career in the defense industry in
SIGINT/COMINT systems. That was followed
by 9 years in the telecom market working with such technolo-
gies as packet switching, SS7, SONET, fiber in the loop and
DSL. For the past 16 years, Mr. Christofferson worked in
product management, marketing and business development
for leading industry RTOS, embedded development tools and
middleware providers such as Microtec, Mentor Graphics and
now Enea, for whom he has served since 1998.
Figure 2: Light-weight threading and Linux concept – best of Linux and RTOS
The Tale of Two Operating Systems
Android and iOS have dominated the smartphone market,
accounting for eight out of every ten phones shipped
according to IDC. The good news: since Google acquired
Android, Inc. from Andy Rubin and company seven years
ago, shipment figures for Android handsets have been
staggering, exceeding 90 million units for Q1 2012 alone.
The aggregated Android market outpaced Apple iOS by a
factor of over two to one. The app market for Android also
seems to be doing well. Recent reports by app analytics
firm Distimo puts the growth of Android apps ahead of
iPhone: there were 100K apps added to Google Play during
the last four months ending June 2012, compared with
63K apps for the Apple App Store in the same period.
But Android is not without challenges, and fragmenta-
tion is at the top of that
list. Android is fragmented
on several different levels:
Android devices vary in price
and quality due to the fact
that they are designed and
sold by a growing legion of
vendors. While Samsung
leads the pack with products
that rival those from Apple
in features and build quality,
there is no short supply of
sub-$100 Android devices
that barely stay running
after a few charges. The
Android device market is
overcrowded, and Google’s recent announcement of the
Nexus 7 tablet at Google I/O only underscores this point.
Another of Android’s challenges is version fragmenta-
tion. Android phones are powered by different versions
of Android, since vendors are less incentivized to upgrade
firmware on older phones – vendors would rather drive
consumers to the latest devices equipped with recent
Android firmware. Lastly, unlike iPad and iPhone, Android
handsets and tablets are driven by multiple CPU architec-
tures, the majority being ARM-based, followed closely by
AMD64/x86 and MIPS.
The consequence of fragmentation with vendors, versions
and processors in the Android market is an inconsistent
user experience and a less than stellar customer loyalty
to the Android brand compared to that for Apple devices.
Follow the money trail and things are even less rosy for
the Android camp. The same report that touts the growth
of Android shipments and apps also acknowledges that
the same top 200 apps generate six times the revenue for
vendors on iOS as they do on Android.
This article makes a case for an approach to Android
development that results in higher performance apps that
are also easily deployable on iOS.
WORA - The Portable Android Application
For maximum portability among Android devices, a typical
Android app is written in Java. Compiled into Java byte-
code and packaged in an .apk, the app can be downloaded
and run on any Android-compatible device regardless of
its underlying CPU architecture; hence “write once run
anywhere (WORA)” porta-
bility. Java bytecode within
the Android app is processed
by a “Java royalty-elim-
eliminator” gadget – known as
dexopt (Figure 1) – the first
time the app runs. Dexopt
converts machine-neutral
bytecode into dexopcodes for
the Dalvik virtual machine.
Java bytecode is stack-based
while dex is register-based
and slightly more compact.
Tomato, tomahto. The point
is that Dalvik dex is not Sun
Java bytecode, and hence is
totally royalty-free – or so says Google.
From a technical perspective, portability of Android apps
(and generally all Java apps) comes at a perceived cost.
Since Dex opcodes are pseudo machine instructions that
are interpreted by the Dalvik VM, Dex and Dalvik have
frequently been blamed for the performance of Android
applications, deservedly or not.
WOSE - The Profitable Android Application
No offense to Google, but application designers with profit
aspirations often set their sights beyond Android. Most
profitable apps were first written for iOS and ported to
Android. In less likely scenarios, apps created first for
Android are often designed up front with a downstream
motive for iOS.

The Universal NDK
An Approach to Creating High-Performance Native Android Applications
By Hieu T. Tran, Viosoft Corporation
Despite their successes in the market, the two OSes – that
is if you consider Android to be an OS – have little in
common in ways of application development. In contrast
to the interpretive nature of Android, iOS apps are written
in Objective C, a high-level, object-oriented language that
combines elements of Smalltalk and C. Compiled into
machine code, iOS apps can run full-speed on the platform,
thus contributing to the performance and responsiveness that
are cornerstones of the success of Apple devices. Against
Figure 1: Dr. Doofenshmirtz’s Java Royalty-elim-eliminator
Figure 2: Typical Android WOSE App
this backdrop, it’s imprudent from a commercial perspec-
tive for developers to design and implement serious apps
in pure Java (slower), and against the Android API (more
difficult to port to iOS). Such a move would entail a com-
plete rewrite, and an ongoing effort to maintain separate
bodies of source code when porting such an app to iOS.
Instead, developers of high-grossing apps – predominantly
games – have empirically designed their programs in such
a way that the apps can be re-targeted for both Android
and iOS with minimal efforts, achieving “write once sell
everywhere (WOSE)” profitability.
WOSE apps are typically partitioned into two planes: the
functional plane, which handles broadcast messages
and events, and proxies activities between the app
and the platform services; and the performance plane,
which processes data and performs number-crunching
operations (Figure 2). In the context of a gaming app, the
functional plane decides whether and how to alert users
if an incoming call occurs in the middle of a game, while
the performance plane abstracts goodies like 2D and 3D
rendering and physics engines.
On Android devices, the functional plane can be imple-
mented in Java and completely rewritten on iOS while
the performance plane can be implemented in C/C++ or
even Objective C with the intent of being source-level
compatible between the two OSes. This is where the
Android Native Development Kit (NDK) comes in. The
NDK is a companion tool to the Android SDK that enables
developers to compile the performance plane portions of
Android applications into native code for the purpose of
accelerating performance and achieving cross OS porta-
bility. The NDK includes the GNU compiler tool-chain,
header files and pre-built libraries that are specific to
given processor architectures. Using the NDK, Android
developers can write C/C++ code that directly accesses
Android API and resources and, in many instances, inte-
grate with legacy native libraries and applications. Given
the architectural-specific nature of C/C++, developers
must recompile performance plane code to generate
multiple shared libraries for each platform/architecture
supported. Such shared libraries are combined with the
app Java bytecode in the .apk file for distribution. Unlike
portable Android Java apps, however, native Android apps
are specific to a given architecture. So a version of Skype™
built for AMD64/x86-based devices won’t run on ARM-
based Android devices, and vice versa.
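The per-architecture builds described above are declared through the NDK's Application.mk; APP_ABI is the real ndk-build variable, while the ABI list shown is just one possible configuration:

```makefile
# Application.mk -- ask ndk-build for one native library per ABI.
# Each listed ABI yields its own shared library under libs/<abi>/,
# and each such build traditionally shipped in a separate .apk.
APP_ABI := armeabi-v7a x86 mips
```

ndk-build then packages each libs/&lt;abi&gt;/ tree alongside the Java bytecode, which is the mechanism the uNDK extends to produce a single universal apk.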
To broaden coverage, developers will need to build and
upload different versions of the same app to the app store
(Figure 3). More than an inconvenience to the developers,
this has the potential of confusing end users and creating a
negative first impression. Picture Grandma having to figure
out if she has an AMD64 or MIPS-based tablet so that she
can download the right version of Skype to chat with Joey.
uNDK - The Universal Native Development Kit
The Universal NDK (uNDK) offers a practical and effective
solution to this challenge. Developers use uNDK the same
way as the NDK. But instead of generating and uploading
multiple apk files for each architecture, the uNDK gener-
ates a single Android apk file containing native code for all
popular CPUs targeting Android, including AMD64/x86,
ARM, and MIPS. The same application image uploaded to
the Android Market under one entry can be downloaded
and run on multiple platforms (Figure 4).
The simplicity of uNDK comes at a slight increase in size,
since the unified Android apk will contain several native
shared libraries for different CPU architectures. This should
not be overlooked, for two reasons. First, on Android devices
where internal storage is at a premium, the overhead associated
with storing unused shared libraries may be problematic.
Second, the bytes consumed in downloading such
extraneous libraries may be welcomed by hat-holding car-
riers, but are disadvantageous to consumers on bandwidth
diets. To help reduce storage space
associated with unused libraries,
the uNDK supplies an Android
utility called NDKstrip that can
be used to transparently strip
out such native libraries from
the application. It is envisaged
that this utility can easily be
integrated into customized ver-
sions of dexopt or the download
manager in order to make the
entire process completely trans-
parent to Grandma. To mitigate
the challenge with extra band-
width consumed for application
downloads, it is envisaged that
Figure 3: Traditional Method for Deploying Native Apps
NDKstrip can be integrated with download services from
application stores like Google Play. In this scenario, the
download services will strip and transmit the minimized app
to the requesting device, and can accelerate this process by
stripping the most frequently requested downloads ahead of
time for various platforms.
The uNDK in Action: AndEBench
AndEBench is a standard benchmark from the Embedded
Microprocessor Benchmark Consortium (EEMBC). Designed
to provide a standardized and industry-accepted method for
evaluating Android platform performance, AndEBench pro-
duces two scores: one for native performance and the other
for Java performance. As such, AndEBench is a mixed-mode
application implemented in C/C++ and Java in order to
benchmark native and VM performance, respectively.
In one build, we created a single AndEBench Android app
from source that incorporates all the CPU architectures the
uNDK supports (AMD64/x86, ARM and MIPS). The same app was
then downloaded and run successfully on three different
handsets, each powered by a different CPU architecture.
Conclusion and Afterthoughts
In this article, we marveled at the stunning success
of Android and iOS, and examined the app develop-
ment alternatives available to the smart-phone
programmers. We looked at the fragmentation chal-
lenges of Android, and implied significant technical
and financial advantages for the developers to strive
to achieve cross-OS portability (WOSE) over cross-
Android platform portability (WORA). We asserted,
based on empirical data, that mix-language, native-
code development is the inevitable future for Android app
development. As such, we introduced the use of uNDK as a
practical and low-risk approach to solving the performance
and architecture fragmentation challenge with Android.
Looking forward, we envisage the emerging popularity of JavaScript/
HTML5 to serve as validation of the approach taken by
the uNDK. Furthermore, we expect to incorporate into uNDK
support for third-party proprietary and open-source libraries
that are presently outside of the Android sandbox.
Hieu Tran is the CTO and founder of Viosoft.
He is a graduate of Tandem’s Guardian OS
team and holds a BS degree from UCLA. Hieu is
a frequent presenter to Fortune 100 companies
on development and debug techniques for em-
bedded Linux and Android applications.
Figure 4: Deploying a Single Universal Native App with the uNDK
Figure 5: AndEBench on AMD64/x86, ARM and MIPS
Today’s new car buyers treat the in-vehicle infotainment
(IVI) system as a determining factor in which vehicle to
purchase. Increasingly, car makers are hearing: “Make
the car’s infotainment system look and feel more like my
smartphone.” In the past, operating systems used for IVI
have varied from home-grown or proprietary, to some type
of Linux or Android derivative. It’s a given that a powerful
operating system capable of handling a wide range of
entertainment, navigation, telematics, and other vehicle
demands is needed. But what is the best choice? Linux?
Android? Or a combination of the two?
This article looks at some of the issues and trade-offs
between Linux and Android -
and what OS platform is best
suited for IVI. While Linux is
more established and proven
in a variety of industries,
Android has a strong fol-
lowing that is hard to ignore.
The Innovation Gap
A large gap exists today in the
rate of innovation when com-
paring automotive design
to smartphone technology.
Over the lifetime of a vehicle,
the mechanical components
are not likely to undergo any
significant change. The elec-
tric components may have a
few minor updates over the
vehicle’s lifetime, and there
may be the occasional software patch to Electronic Control
Units (ECUs) and infotainment systems, but for the most
part, the whole platform remains untouched for the life-
time of that vehicle. Compare that to the smartphone user
experience which sees several new platforms announced
each year, (as of this writing the latest Android release is
4.1 or “Jelly Bean”) and the continuous flow of download-
able and highly valuable software applications, or “apps.”
Given this innovation gap, car makers need to prepare
for a high rate of design change. The cost of keeping up
with this change has forced car makers and their tier-one
suppliers to look for a standardized and lower cost infotain-
ment solution that can be quickly adapted and updated.
According to the website, over 20 million
lines of code are used in the infotainment system of the
S-Class Mercedes-Benz today. It is projected by 2015 that
total software in a vehicle will grow to 300 million lines
of code. Software becomes a valuable and important part
of the car infrastructure. There has to be an efficient and
cost-effective way to keep components up to date.
Open Source Software And Automotive
Open source software is one way to innovate in IVI systems
while at the same time holding down costs. In the collaborative
environment of open source development, automotive manu-
facturers and their tier-one
suppliers – including their
competitors – agree to share
common elements of the IVI
software stack. Key proprietary
components remain with each
automotive manufacturer
where competitive differen-
tiation is needed. This is exactly
what’s happening with the
newly formed GENIVI Alliance,
a group of automotive makers
from across the globe com-
mitted to the broad adoption
of an open source IVI develop-
ment platform as it relates
specifically to IVI compliance.
Android On Consumer
Devices; Java Anyone?
For mobile consumer devices, tablets, and smartphones,
Android is winning out as the dominant operating system
with around 42 percent of the U.S. smartphone market in
2011...and growing. The numbers are equally impressive
globally (Figure 1). One reason for the widespread Android
adoption is the many Android apps now available. Further,
Android apps have emerged specifically to be used on a host
smartphone within the in-car environment. Maps, naviga-
tion, and “car-mode” (an app to automatically set up speaker
phone while driving) are available in Android 2.0 releases
and later. These many apps help to deliver a solid “Android-
in-the-car” automotive experience.
Life in the Fast Lane: Linux or Android
for Automotive Infotainment Systems?
Linux is probably in a better position to achieve widespread adoption, while Android
will continue to make things interesting in the “connected car” segment.
By Andrew Patterson, Mentor Graphics
So the popularity of Android has a lot to do with the vast
array of available apps. According to AOL Tech, so far in 2012,
the download rate for Android apps is 1.5 billion installs per
month, with a total of nearly 20 billion installs to date. The
Java development language is also widely used and freely
available, so there too is a strong community of developers.
The portability of Java apps could also turn into an Android
weakness – there is nothing to stop these apps from being
ported and made available on a different platform.
Are there business risks in adopting Android? Some devel-
opers are nervous about the omnipresence of Google – as
Google is the sole provider and owner of the Android oper-
ating system platform. Because the release schedule and
content of Android is managed by Google, many automotive
software developers are uncomfortable with being depen-
dent on Google. What would happen if the license or terms
of use suddenly changed? The original Android operating
system was designed exclusively for mobile smartphone
applications and as a result, Android has to be modified to
handle the wide variety of audio streams in the vehicle with
signals coming from reversing sensors, radio, DVD player,
navigation, phone, and external sources. The technical chal-
lenges can be worked out, but require significant effort. The
middleware in Android that covers audio stream routing has
proved difficult to modify and re-test – the intended info-
tainment system has to link in at several points including
the audio flinger (mixer providing a single output at a speci-
fied sample rate), underlying audio hardware, and the audio
manager. Some developers are questioning why they should
even bother – why not just dock the Android smartphone
into the vehicle’s dashboard and what’s needed will be pro-
vided via the phone?
The Widespread Acceptance Of Linux
Linux is the largest collaborative software development
of all time with over 8,000 developers from 800 different
companies having been involved since 2005. Linux has
proven itself to be a safe and secure operating system
serving nearly every market segment. In the smartphone
market, the Linux Foundation estimates that 850,000 new
Linux-based mobile devices are registered every day. It’s
also worth noting that 90 percent of all financial trans-
actions on Wall Street are carried out on Linux servers.
Figure 1: The meteoric rise of the Android OS platform. By mid-2011, over 50 percent of all smartphones sold globally were based on the Android OS.
(Source: Smartmo – Based on Gartner actuals and IDC forecast) 21
Further, Linux is now embedded in nearly every TV set
sold today, and powers the servers of popular websites like
Amazon, Facebook, Google, and Twitter. Because it is not
owned by a single dominant company, it does not get the
attention it deserves.
Linux has evolved into over 100 variants, aimed at commer-
cial and private users. The most popular variant is Ubuntu
(Source: Hub Pages, June 2012), and the commercial dis-
tributor Canonical claims this will ship on five percent of all
PCs sold in 2013. It is not surprising
that GENIVI and the automotive
community selected an open source
Linux as the base operating system
for infotainment. Several other
GENIVI members, including Mentor
Graphics, have adapted their Linux
software products to be GENIVI 2.0 compliant.
It can be argued that both Linux and
Android have strengths for vehicle
infotainment solutions. But when
it’s time to build the most compre-
hensive IVI system, Linux shows the
most potential. Why? There are at
least three critical factors:
- Linux is proven, mature, and
accepted in many demanding
computing environments. Most of
Android’s success has been in the
consumer/mobile marketplace.
- Linux is flexible:
for example, by adding a hypervisor
layer, Android can be deployed as
a client operating system, making
available some of Android’s con-
sumer benefits. Other run-time
environments can also be added.
- The automotive industry is recog-
nizing Linux as the one OS that
can handle not only the core elements
of infotainment platforms, such
as graphics and multicore sup-
port, but also new features needed
for connected cars: anti-collision
technology, voice-activated com-
mands, gesture recognition, and the
future requirements of in-vehicle
connectivity.
Even with the above Linux
strengths, it is worth considering
how both Linux and Android can
be utilized together to strengthen
the overall effectiveness and performance of an IVI system
design. Two such scenarios include:
-Multiple OS architectures:
Implementing a virtualization layer is an elegant way of
allowing Linux and Android to run together on a single hard-
ware platform (Figure 2). Each operating system runs on a
dedicated virtual machine and the hardware resources avail-
able are shared. Communication is allowed in a controlled
Figure 2: Typical implementation architecture with virtualization layer supporting both Android and Linux.
Figure 3: Android OS and apps inside a Linux Container add security by controlling Android access to
onboard systems.
manner between the different operating domains, allowing
some data to be shared between applications. Privileges can
also be set so that the resources are managed and access is
denied to some system functions. This allows a safe and secure
domain where un-trusted or uncertified applications might be
downloaded into the infotainment system.
The underlying hardware platforms are rapidly developing to
support this type of virtualized application. Semiconductor
suppliers are including third-party specialized graphics accel-
erators, multiple CPU cores, and adding networking standards
such as CAN, MOST, FlexRay, and AVB used by automotive soft-
ware designers. Some virtualization implementations include
diagnostics and support that help manage the system load,
optimizing the available CPU power for each host application.
-Linux Containers:
Another implementation option is to have the Android operating
system hosted on top of Linux in a “Linux Container” (Figure 3).
The resources, access control, and security of the Android client
are managed by the host Linux
operating system. In the future,
this Android client could form
a key part of an infotainment
system – and even become a
smartphone slave/client. In
this way a familiar UI and set
of Android apps appear on the
vehicle infotainment system
when the driver’s smartphone
comes into range.
Combining IVI With Vehicle Critical Functions
Safety and reliability are top priorities for any car maker and
inclusion of open source software together with complex elec-
tronics presents a new level of risk. A modern luxury vehicle
already has around 70 to 100 software-based ECUs, and up
until now, these ECUs have been kept separate from infotain-
ment systems. The risk of interference was just too great.
However, as manufacturers look to further cut costs, the con-
cept of combining several functions on a single, high-power
hardware platform is becoming more attractive. For example, a
reversing sensor may need to communicate with the infotain-
ment system to generate audible warnings to the driver, and
software components supporting this could logically co-exist
within the infotainment system. As the instrument cluster
(speedometer, fuel gauge, and so on) becomes increasingly
software-based, this could also be supported by the Linux (or
Android) operating system.
Consolidation will need to be done carefully, so that critical
functions can start up quickly and run reliably – regard-
less of what’s happening in the infotainment system.
Building the IVI System
Mentor Graphics offers a full suite of design tools and services to
allow fast implementation of open source IVI solutions. At the heart
of Mentor’s IVI solution is Mentor Embedded Linux: a reliable,
proven, and flexible Linux operating system. It integrates graphics,
communications, and multimedia middleware (including compo-
nents for connectivity, audio, speech, positioning, networking and
security) with libraries, system infrastructure and management
components – all operating on top of the Linux kernel and available
on popular processor architectures such as Intel® Atom™, ARM
Cortex-A8, and ARM Cortex-A9. This environment can be used for
developing both Linux and Android architectures and Apps.
In 2009, the GENIVI Alliance was established to develop a Linux
variant specifically designed and purposed for in-vehicle infotain-
ment systems. Today, with its 2.0 release, GENIVI Linux offers a
standard platform for developing high functionality infotainment
systems. However, there is still a long way to go as many of the
innovations around the Linux mobile office and connected car have
yet to filter down to the GENIVI
Linux platform.
Nonetheless, innovative and
widespread technologies behind
Linux and Android have fueled
an explosion of features and
functionality that are now
available in vehicle infotain-
ment systems. Both operating
systems have a place in today’s
automotive software ecosystem.
Linux is probably in a better
position to achieve widespread
adoption, while Android will
continue to make things interesting in the “connected car” seg-
ment. No doubt we will see new and exciting ways of achieving
connectivity not only between consumers and their automobiles,
but between automobiles/consumers and the cloud, and connec-
tivity between automobiles as well.
Andrew Patterson is business development director for
the Mentor Graphics Embedded Software Division,
specializing in the automotive market. Prior to Mentor,
Andrew spent over 20 years in the design automation
market specializing in a wide range of technologies in-
cluding wire harness design, automotive simulation
model development, virtual prototyping, and mechatronics. Currently,
he is focused on working with the GENIVI industry alliance, and leading
Mentor’s infotainment and in-vehicle electronic cluster and telematic
solutions. Andrew holds a master's degree in Engineering and Electrical
Sciences from Cambridge University, UK.
Virtualization is a thriving technology proven to be successful
in enterprise IT such as data centers and cloud computing.
However, technology vendors have only scratched the surface
on providing virtualization-based solutions, leaving untapped
opportunities in industries beyond IT, specifically in the
security-critical and safety-critical markets. A major tech
producing industry that has yet to fully seize the expansive
opportunities of virtualization is the embedded computing
world, which serves a wide set of markets from defense sys-
tems to biomedical devices. This slower adoption is due to the
underlying technology of virtualization – the hypervisor. Up
until now, hypervisors were primarily designed to serve the
popular demands of enterprise IT, focused to run in IT server
and desktop environments. As a result, these enterprise IT
hypervisors do not support the strict properties commonly
needed in embedded designs such as low power, small size, and
determinism. However, as security in these embedded devices
becomes a significant concern, the possibility of using virtualization to achieve security in embedded devices is gaining
momentum in the embedded market.
This article identifies unique security and reliability capabili-
ties hypervisors have to offer to the embedded community
and how the new Type Zero Hypervisor is able to deliver
these capabilities with its unique architecture.
Hypervisors for IT Infrastructure
The “hypervisor” is software that creates an abstraction layer
between hardware and operating systems, serving as the
underlying technology of computer virtualization. Hypervi-
sors achieve this layer of abstraction by taking full control over
the physical computing platform to create software “virtual”
hardware platforms that emulate the underlying hardware
(Figure 1). These emulated platforms then allow operating
systems, referred to as guest OSs, to run on the emulated
platform instead of on the physical hardware. The emulated
platforms can be replicated multiple times to support multiple
guest OSs on the same machine, and can also be transferred to
other hypervisor enabled machines.
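This emulate-and-replicate behavior can be pictured with a toy model. The sketch below is purely illustrative (the class names and resource accounting are invented for this article, not taken from any real hypervisor's API), but it shows how virtual platforms are carved out of a single physical machine and handed to independent guest OSs:

```python
# Toy model of a hypervisor replicating emulated platforms for guests.
# All names here are illustrative, not any real product's interface.

class VirtualPlatform:
    """An emulated hardware platform presented to one guest OS."""
    def __init__(self, cpus, mem_mb):
        self.cpus = cpus
        self.mem_mb = mem_mb
        self.guest = None

class Hypervisor:
    def __init__(self, phys_cpus, phys_mem_mb):
        self.free_cpus = phys_cpus
        self.free_mem_mb = phys_mem_mb
        self.platforms = []

    def create_platform(self, cpus, mem_mb):
        # Each virtual platform is carved out of the physical machine;
        # replication is bounded only by the physical resources.
        if cpus > self.free_cpus or mem_mb > self.free_mem_mb:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_mem_mb -= mem_mb
        vp = VirtualPlatform(cpus, mem_mb)
        self.platforms.append(vp)
        return vp

    def boot_guest(self, platform, guest_os):
        # The guest sees only its emulated platform, never the host.
        platform.guest = guest_os

hv = Hypervisor(phys_cpus=8, phys_mem_mb=16384)
vp1 = hv.create_platform(cpus=2, mem_mb=4096)
vp2 = hv.create_platform(cpus=2, mem_mb=4096)
hv.boot_guest(vp1, "Linux")
hv.boot_guest(vp2, "Android")
```

Two guests now run side by side on one machine, each against its own platform object, which is the property that multi-guest deployments exploit.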
Today, hypervisors are most commonly deployed on IT servers
and PCs to take advantage of multi-guest OS operation,
which reduces the cost of maintaining multiple platforms
and combines the capabilities offered by multiple flavors of
OSs on a single platform. Hypervisors used in IT fit into two
commonly designated architectures, type 2 and type 1:
Type 2 hypervisors run as applications on top of a general-purpose OS such as Windows or Mac OS. Type 2 hypervisors are commonly deployed to run user programs designed for one OS on a machine running a different OS; for example, running Windows applications on a Mac.
Type 1, also referred to as bare metal, is a single software
hypervisor package that runs directly on hardware. The
software packages in today’s IT type 1 hypervisors include
a hypervisor integrated, or paired, with a special purpose
host operating system and additional applications to support
features needed by the enterprise IT market.
Existing type 2 and type 1 hypervisors are
unsuited for use in embedded systems because
they include a significant amount of unneces-
sary functionality that can greatly impact the
size, security, and performance of an embedded
system design.
Embedded Hypervisors – Going
Beyond IT
Hypervisors, if designed correctly, can offer
benefits for embedded devices, and provide
capabilities that are not offered by today’s enter-
prise hypervisors. The hypervisor’s full control
over the hardware platform and ability to vir-
tualize hardware platforms can be used to build
Type Zero Hypervisor – the New
Frontier in Embedded Virtualization
The hypervisor’s full control over the hardware platform and ability to virtualize
hardware platforms are beneficial in environments that face high security threats
and demand high reliability.
By Will Keegan and Arun Subbarao, LynuxWorks, Inc.
Figure 1: Hypervisor
advanced solutions to solve major problems in environments
that face a high security threat and demand high reliability.
Some of the major security and reliability use cases offered
by hypervisors are listed below:
t Security Domain Isolation - The hypervisor’s full control
over the hardware platform has the ability to isolate access
to hardware resources to create separate computing environ-
ments for guest OSs that prohibit unauthorized information
flow between security domains. Security domain isolation
is extremely useful in tactical defense systems deployed on
size, weight, and power (SWaP) restricted platforms, such
as Humvees and aircraft, that currently require multiple
computing platforms to process separate levels of classified
data. With a hypervisor a single computing platform can be
used to process multiple levels of classified data while main-
taining separation between the security domains (Figure 2).
t Independent Measurement - In safety-critical envi-
ronments, systems are commonly built with redundant
components and system health monitors to detect the event
of a component failure and recover operation with redundant
components. Hypervisors can create independent com-
puting environments that allow mission-critical functions
to run without the interference of co-existing applications
or complex dependencies of full operating systems. Using
a hypervisor, a single computing node can run a system
application in one virtual environment and an indepen-
dent health monitor in a separate environment to measure
the status of the application (Figure 3). In the event of an
application error the health monitor has the opportunity to
locally reset the application or direct a failover procedure for
quicker response time and smarter fault-tolerant designs.
t Reference Monitoring - Both safety-critical and security-
critical system computing nodes rely on data channel
interfaces for either local storage or intersystem communi-
cation. A compromise in the integrity or authenticity of data
transferred over communication channels can compromise
the security and availability of the entire system. Hypervi-
sors can provide the ability to independently mediate access
and monitor information flow between applications and
data channel interfaces to ensure all information flow is untampered and always authorized, maintaining correct operation.
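The independent-measurement use case in particular lends itself to a short sketch. The heartbeat protocol and timeout policy below are hypothetical, invented to illustrate the pattern of a monitor that runs apart from the application it watches:

```python
# Sketch of an independent health monitor: the application, running in
# one virtual environment, emits heartbeats; the monitor, running in a
# separate environment, detects a missed deadline and resets it locally.
# Tick counts and the timeout policy are illustrative.

class MonitoredApp:
    def __init__(self):
        self.restarts = 0
        self.last_heartbeat = 0   # tick of the most recent heartbeat

    def heartbeat(self, tick):
        self.last_heartbeat = tick

    def reset(self):
        self.restarts += 1

class HealthMonitor:
    """Independent of the app, as it would be in its own environment."""
    def __init__(self, app, timeout_ticks):
        self.app = app
        self.timeout = timeout_ticks

    def check(self, tick):
        # Failover decision: a silent app is reset for quicker recovery.
        if tick - self.app.last_heartbeat > self.timeout:
            self.app.reset()
            self.app.last_heartbeat = tick   # grant a fresh window
            return "reset"
        return "ok"

app = MonitoredApp()
mon = HealthMonitor(app, timeout_ticks=3)
app.heartbeat(1)
status_t2 = mon.check(2)   # heartbeat still fresh
status_t6 = mon.check(6)   # deadline missed: local reset
```

Because the monitor holds no dependency on the application's OS, a hang or crash inside the guest cannot take the monitor down with it.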
These hypervisor security and reliability use cases face two
major technical challenges: 1) Having a security foundation
that hosts independent computing domains and controls
information flow between guest OSs, critical functions,
and system resources. 2) Availability of a hypervisor that
addresses the needs of embedded platforms. These chal-
lenges by themselves are hard to satisfy with today’s existing
solutions. Trying to satisfy both requires a new design.
The “Type Zero” hypervisor architecture, designed by Lynux-
Works from the ground-up to operate in safety-critical and
security-critical environments while meeting the stringent
demands of embedded computing platforms, fully satisfies
the requirements of these and many other use cases.
Introducing “Type Zero”
“Type Zero” is a new bare-metal architecture, designed by LynuxWorks, that differs from type 1 by removing
Figure 2: Hypervisor Security Domain Isolation
Figure 3: Hypervisor Independent Measurement
Figure 4: Hypervisor Reference Monitor
all unneeded functionality from the “security-sensitive” hypervisor mode, while still virtualizing guest operating systems in a tiny stand-alone package. By shedding the need for support from a full operating system, the Type Zero hypervisor drastically reduces the size and computational overhead imposed
on target embedded systems. Figure 5 shows a comparison in
size between type 2, type 1, and Type Zero architectures, indi-
cating that the majority of code size in the type 2 and type 1
hypervisors is attributed to the underlying host or helper OS.
Small size is one of many hypervisor design aspects needed
by embedded systems. In order for hypervisors to operate in
embedded mission critical systems, major architectural design
considerations must be addressed to ensure key embedded,
security, and reliability requirements are recognized and accom-
modated. The following properties are identified as key hypervisor
architecture requirements for embedded virtualization systems
for use in safety-critical and security-critical environments:
t Minimal Size - Embedded systems are commonly faced
with limiting storage and memory restrictions. Embedded
solutions utilizing virtualization technology must consider
both the footprint of the guest OS and the foot print of
the supporting hypervisor. Typical embedded hypervisors
consume less than 512 KB of storage and less than 4MB of
system RAM. In contrast, today’s available type 1 hypervi-
sors require storage footprints from hundreds of megabytes
to several gigabytes before adding guest OS images, and
consume several hundreds of megabytes to nearly a gigabyte
of RAM. The base storage and memory footprint of type 1
hypervisors range from tens to thousands times larger than
the demands of traditional embedded OSs which may well
exceed the size restrictions on an embedded platform.
t Maximum Efficiency - Efficiency is very important for
embedded solutions that have demanding throughput speci-
fications or must operate in power-conscious devices with
very limited processing capabilities. In order to maximize
efficiency, hypervisors must only contain the functionality
that is necessary and sufficient to serve the guest OS and its
applications. Type 1 hypervisors, for example, depend on
the underlying support of a closed operating system, which
may consume unnecessary CPU cycles outside the control of
the embedded system architect.
t Determinism - Embedded systems often rely on the ability
to guarantee the time of execution for all system operations.
Having control over the timeliness of system operations
allows architects to construct solutions that ensure the
proper behavior of mission-critical functions and overall
system availability. The biggest impact hypervisors have on
determinism is the scheduler used to assign CPU processing cycles to guest OSs. In order to perform any function that
requires deterministic behavior in a virtualized environ-
ment, architects must have full control over the hypervisor
scheduler to guarantee that critical functions are scheduled
to execute on time, and to ensure that other low priority
operations do not interfere with critical processes. Type 1
hypervisors utilize a dynamic CPU scheduler that determines the order of execution of guest OSs on the CPU based on guest OS throughput demand. Dynamic CPU schedulers
take control of execution from the system architect and pass
Figure 5: Hypervisor Size Comparison Chart
it to the guest applications, which can be exploited by rogue applications for denial-of-service attacks.
t Security - Security is the most important property of
a hypervisor running in high threat environments. The
hypervisor is privileged software responsible for orches-
trating the simultaneous execution of guest OSs while
protecting each guest OS’s integrity, confidentiality, and
availability. All code running in the hypervisor has a direct
impact on the overall security, reliability, and determinism
of a hypervisor-enabled platform. Any unauthorized
access or control over the hypervisor can be devastating
for embedded solutions targeted for operation in safety or
security-critical environments. The best way to strengthen
the security of a hypervisor, or any system, is to limit the
access components have over privileged resources and to
reduce the complexity of the design. Type 1 hypervisors
that rely on host OSs include complex privileged components like device drivers and I/O stacks, which makes it very difficult to verify that the code in these components does not contain an exploitable flaw that could grant unauthorized access to the hypervisor.
t Reliability - Reliability is the most important property
for safety-critical systems. Many factors contribute to the
reliability of a hypervisor, including design complexity,
determinism, and foundational security. Type 1 hypervisors
are heavily tested to maintain operation, but the reliance on a
full operating system does introduce significant risk through
complexities in core components such as dynamic process
scheduling, full process model, dynamic memory manage-
ment, file systems, I/O stacks, and third party device drivers.
Any flaw in these components can cause system failure.
t Flexibility - Any foundational technology used in
embedded systems requires flexibility for architects to mold
the technology to fit their specific system designs. Although
hypervisors are mainly marketed for their ability to host
multiple OSs, the hypervisor’s control over the physical
hardware can provide capabilities that go beyond emulating
computer platforms. Type 1 hypervisors, by contrast, provide a limited user model that conforms to enterprise IT use cases.
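The determinism property above is easiest to see against a concrete schedule. The sketch below models the static, architect-defined kind of scheduling that contrasts with a dynamic IT scheduler: a fixed major frame that repeats forever, so the owner of any future tick is known in advance. The frame layout and guest names are invented for illustration:

```python
# A fixed cyclic schedule: the architect, not guest demand, decides who
# runs when. The major frame below is illustrative.

MAJOR_FRAME = [           # (guest, ticks); the frame repeats forever
    ("rtos_guest", 2),    # critical RTOS always gets the first slot
    ("linux_guest", 3),
    ("monitor", 1),
]

def guest_at_tick(tick):
    """Return which guest owns a given tick: fully deterministic."""
    frame_len = sum(t for _, t in MAJOR_FRAME)
    offset = tick % frame_len
    for guest, ticks in MAJOR_FRAME:
        if offset < ticks:
            return guest
        offset -= ticks

# The whole future schedule is knowable up front.
schedule = [guest_at_tick(t) for t in range(8)]
```

No load condition in the Linux guest can push the RTOS guest out of its slot, which is precisely the guarantee a dynamic scheduler cannot make.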
LynuxWorks’ LynxSecure Type Zero hypervisor exemplifies
these architectural principles to ensure that key embedded
mission-critical requirements can be realized using virtual-
ization, as discussed in detail in the next section.
LynxSecure - Type Zero Hypervisor Architecture
The design goal of the LynxSecure Type Zero hypervisor
architecture is to provide a secure and reliable foundation for
virtualization platforms to serve a broad array of computing
environments from embedded to enterprise systems. This
objective of providing a secure foundation with the features
to serve an expansive market poses a common paradox found
in architecture design. A secure and reliable foundation
demands a small and simple code base, but offering broad
functionality increases complexity which can compromise
size and security. LynxSecure’s Type Zero architecture solves
this problem by establishing a foundational core needed
by all virtualization markets while providing an external
configuration framework that allows for many unique vir-
tualization solutions to be constructed, without imposing
unnecessary code bloat in the hypervisor core.
LynxSecure - Type Zero Hypervisor Core
The core foundation of the Type Zero hypervisor establishes a
baseline set of functionality to support a virtualization frame-
work that will enable system architects to build virtualization
solutions for any market. The key to supporting this framework
is selecting the minimal set of components needed to maintain a secure, reliable, and efficient foundation for all forms of Type Zero hypervisor deployments. The following set of functional
components is implemented to comprise the LynxSecure Type
Zero hypervisor core foundation (Figure 6):
t Real-time Virtual CPU (RTvCPU) Scheduler - The real-
time virtual CPU scheduler orchestrates the execution of
general guest OSs, real-time guest OS, and bare-metal appli-
cations) on the hardware CPU cores. The real-time scheduler
gives system architects the flexibility to control execution
scheduling on multiple, dedicated, or shared CPU cores with
clock-tick precision to host real-time OSs and applications.
The virtual CPU scheduler utilizes Intel VT-x to allow guest
OSs to run directly on the CPU cores, reducing significant
software complexity and computational overhead. Without
VT-x, hypervisors require additional software support to
emulate the CPU for proper guest OS execution.
t Memory Manager - The memory manager allocates the
memory for each guest OS and is responsible for pro-
tecting the integrity and confidentiality of the information
stored and processed by each of the co-existing guests.
Protecting the integrity and confidentiality of each guest
OS is extremely important for solutions that require secu-
rity domain separation between guest OSs. The memory
manager also controls shared memory structures for
intercommunication between guest OSs, bare-metal appli-
cations, virtual devices, para-virtual devices, and physical
devices. The memory manager’s role in fully protecting
guest OS memory from unauthorized access is broken into
two categories: protecting unauthorized access to guest OS
memory from co-existing guest OSs, and protecting guest
OS memory from external I/O devices.
The memory manager is able to protect against unauthorized access requests originating from guest OSs; however, it must rely on Intel's VT-d hardware to explicitly control the boundaries of memory read and write requests
originating from external devices. In addition to VT-d, the
memory manager benefits from Intel’s recent extended page
table (EPT) hardware feature. Using EPT, guest OSs are able
to directly manage their local memory page tables, no longer requiring assistance from the hypervisor, which removes a significant bottleneck in guest OS memory access performance.
t Hypercall API - The Hypercall API is a privileged hyper-
visor interface utilized by the virtualization framework to
provide guest OSs and bare-metal applications a facility for
inter-guest communication, guest OS management, audit,
and maintenance management.
t Interrupt Handler - The interrupt handler manages inter-
rupt signal routing for efficient asymmetric communication
channels between guest OSs, bare-metal applications, vir-
tual devices, para-virtual devices, and physical devices.
t Exception Handler - The exception handler manages illegal
or privileged guest OS operations to ensure all system
operations do not subvert the availability, integrity, and
confidentiality protections provided by the hypervisor.
t Security Monitor - The security monitor is responsible for
bringing the hypervisor into a secure state and continuously
monitors security critical hardware resources to maintain a
secure operational state. The security monitor relies on the
Intel TXT feature set during the startup initialization process.
Prior to loading the hypervisor, the hardware trusted plat-
form module (TPM) is controlled via Intel’s TXT instruction
set to validate that the Type Zero hypervisor is not compromised
and is ready to enter full operational state.
t System Audit - The system audit component is an advanced
service for recording major security, safety, or user defined
system events that can be passed up to guest OSs or bare-
metal applications to build robust fault detection, threat
detection, and system recovery sub-systems.
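The interplay between the Hypercall API and the exception-handling path can be sketched as a guarded dispatch table. The call numbers and per-guest authorization policy below are invented for this article and do not reflect LynxSecure's actual interface:

```python
# Toy hypercall dispatch: a guest traps in with a call number, and the
# hypervisor services only calls that guest is authorized to make.

HYPERCALLS = {
    0: "send_msg",        # inter-guest communication
    1: "guest_restart",   # guest OS management
    2: "audit_read",      # audit / maintenance
}

# Per-guest authorization: which call numbers each guest may issue.
POLICY = {
    "linux_guest": {0},
    "mgmt_guest": {0, 1, 2},
}

def hypercall(guest, number):
    if number not in HYPERCALLS:
        return ("fault", "unknown hypercall")
    if number not in POLICY.get(guest, set()):
        # Exception path: a privileged request from an unauthorized
        # guest is refused rather than executed.
        return ("fault", "not authorized")
    return ("ok", HYPERCALLS[number])

r1 = hypercall("linux_guest", 0)   # permitted
r2 = hypercall("linux_guest", 1)   # refused: management call
r3 = hypercall("mgmt_guest", 1)    # permitted
```

Keeping the policy table outside the dispatch logic mirrors the article's point that flexibility lives in external configuration, not in the hypervisor core.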
LynxSecure’s Type Zero hypervisor core design satisfies the
size, efficiency, determinism, security, and reliability require-
ments of embedded mission-critical systems, while leaving
the need for flexibility up to the higher level virtualization
framework. By selecting a minimum set of functionality
and utilizing Intel’s hardware assistance, the size and com-
plexity of the core components are drastically reduced to
assure vital security and reliability logic is correct, while the
software computational overhead is minimized to improve latency for stronger deterministic behavior.
Virtualization is a powerful technology that is changing the way organizations of all shapes and sizes do business through the cost-saving and security benefits it offers. Up until now, however,
virtualization has been confined to
IT server and PC environments leaving a world of untapped
opportunity for technology producers to explore. With help from advancements in hardware-assisted virtualization features from chip vendors like Intel, combined with the vision of embedded RTOS company LynuxWorks, the Type Zero hypervisor emerges to give the embedded community the tools it needs to deliver the benefits of virtualization beyond the realm of enterprise IT, into new industries with the most demanding security and reliability requirements.
Will Keegan is a technical specialist at LynuxWorks,
Inc., where he upholds a strategic role in supporting
sales, marketing, and engineering. He has over 7 years
of experience working in enterprise IT, safety-critical,
and security-critical industries. He previously served
as a product engineer for OIS where he worked on the
development and marketing of various high assurance cryptographic
network and embedded middleware products. Will also served as a
network engineer for USAA, building and maintaining world class
data centers. He graduated from the University of Texas at Austin in
2005, earning a B.S. in Computer Science.
Arun Subbarao is Vice President of Engineering
at LynuxWorks, responsible for the development
of security, virtualization and operating-system
products, as well as consulting services. He has 20
years of experience in the software industry working
on security, virtualization, operating systems and
networking technologies. In this role, he spearhead-
ed the development of the award-winning LynxSecure separation
kernel and hypervisor product as well as software innovation in
the areas of security, safety and virtualization. He has also been a
panelist and presenter at several industry conferences. He holds a
BS in Computer Science from India, MS in Computer Science from
SUNY Albany and an MBA from Santa Clara University.
Figure 6: LynxSecure Type Zero Hypervisor Core
During the last 20 years, malware has evolved from occa-
sional “exploits” to a global multimillion-dollar criminal
industry.1 We hear about viruses such as Flame and
Stuxnet, which can infect whole country infrastructures
with relative ease. It seems to be getting simpler for
hackers and malware to breach private companies and
government agencies alike. For example, for at least two
years, Flame has been copying documents and recording audio, keystrokes, and network traffic, and taking screenshots from infected computers, passing all the information to servers operated by its creators.2 If it's that easy to
attack governments and infrastructures, how difficult do
you think it is to hack a smartphone?
In network security, perimeter-based and scanning tech-
niques are penetrated and
circumvented with alarming
regularity. This has resulted
in the more widespread use
of application layer security
technologies, which are now
considered to be a critical com-
ponent for security engineers
who have come to realize how
important in-depth defense
techniques are in the current
threat landscape.
A PC currently can expect between 40 and 200 minutes of
freedom before an automated probe reaches it to determine
whether it can be penetrated.3 This just shows how little
time one needs to be connected to the Internet – wireless or
not – before it’s touched and potentially hacked. If you think
that PCs aren’t very secure, the smartphone (with little to
no security in the apps or on the phone itself) is even less so.
And, of course, the latest trend is custom malware for
attacking smartphones.
Custom Malware Designed for Smartphones
Application providers need to step up and begin building
in sufficient security for mobile devices, including vulner-
ability mitigation, re-evaluation of trust and incorporation
of secure authentication channels.
The need for these techniques is magnified on mobile plat-
forms and perhaps none more so than on Android. A recent
study by AV-TEST showed that more than 75 percent of
anti-malware solutions ignored at least one in every 10 of
the main families of malware in the wild.4 Add to this that
Android malware is increasing dramatically, quadrupling
between 2011 and 2012,5 and it seems that failing to protect
mobile applications in general, and Android applications in
particular, might be inviting a disaster.
The open source nature of the Android platform means that
there are a plethora of free, widely available and powerful tools.
While these have legitimate
uses, they also make it simple
to reverse-engineer unprotected
applications or even elements
of the OS itself, in order to
assess vulnerabilities and create
attacks. Add to this the fact that
there are a wide range of largely
unpoliced Android marketplaces
where practically any applica-
tion can be uploaded, making it
unsurprising that the security
situation has been likened to
the Wild West. Even Google’s own marketplace, with its ‘Bouncer’ malware detection system, is far from infallible, as
researchers recently showed.6
Mobile Security Critical for Businesses
With the huge growth of smartphones and the applications
that run on them, mobile security is becoming a critical area
for all businesses. The sheer volume of commercially sensitive, personal employee, and other key data both stored in and transmitted via these devices makes them an attractive target for hackers. They are also an obvious route for threats that
seek to penetrate the back office to corrupt data, capture it, or
maliciously alter software through mobile application attacks.
Needed: Self-Protecting, Security-
Aware Mobile Applications with
Anti-Tamper Technology
Application providers need to step up and begin building in sufficient security
for mobile devices, including vulnerability mitigation, re-evaluation of trust and
incorporation of secure authentication channels.
By Andrew McLennan, Metaforic
Unfortunately, to date, security in Android has been inef-
fective. Custom malware attacks on Android applications
are increasing exponentially and theft of software, data and
content is rising to match. Hackers create and inject malware that can change the behavior of applications, substitute
account numbers, modify amounts, initiate egregious
transactions, capture PINs, passcodes and more. Applications running on remote devices, with unknown configurations, need to be able to defend themselves and their communications, and to clearly signal if they have been compromised.
Apple’s iOS is not the impervious walled garden that many would have you believe, either. A number of malicious applications have been
removed from the App Store and Russian malware was
recently pulled after managing to pass through Apple’s
normal protections around their market.7
Approaches to Secure Mobile Devices
There are various means to secure mobile device transactions, drawing on a comprehensive portfolio of embedded security solutions; the most obvious is anti-tamper technology, which prevents code and data changes. Anti-tamper is the most significant
development in information security since the advent of
the firewall and is perhaps the most advanced item in the
security professional’s toolkit. The principle behind anti-
tamper is quite simple: rather than relying on the security
of the environment (by making the assumption that fire-
walls and virus checkers are installed, correctly configured
and updated) anti-tamper ensures that the application can
defend itself and its own data.
Clearly this approach will become the standard method for
securing applications in the next few years as it is obvious
that traditional approaches to security are now insufficient.
‘Defense-in-depth’ is now required for any applications that
need to ensure the integrity of their operation.
There are numerous ways anti-tamper technology can help
secure smartphone apps for financial transactions:
1) Protect the application itself against subversion. If it is
possible to alter the application’s operation, any security
methods inherent in it are open to trivial attack; data valida-
tion can be avoided, transactions can be altered or rerouted,
data can be captured, and routines can be called at will to
have previously unintended consequences.
2) Protect application data. In the same way as application code
can be prevented from alteration, its data can be protected.
3) Protect data and keys within the application from capture
or extraction by using cryptographic primitives, which
prevent malware from being able to access the values
of keys and other sensitive information by not holding
them ‘in the clear’ in memory but instead by holding their
values programmatically/algorithmically (e.g., to ensure
bank account details are not captured and stolen).
4) Prevent ‘code lifting’ to
extract individual functional-
ities (e.g., hackers might wish
to use a code fragment that
signs data with a key to sign
some of their own data for a
Man-In-The-Middle attack to
reroute a payment transaction
to a bogus account).
5) Trigger a response. Once an application is protected against
subversion, any detection of an application level attack can
trigger a response. While that may typically be as simple as
alerting the user to a problem and exiting the application, anti-tamper technology typically allows custom responses; e.g., sending a message to a server, perhaps to blacklist at the server side a device on which a compromise attempt has been made.
6) Repair attacked applications or data. Should even one
bit of an application or its data be altered and this be
detected, the technology is available to repair the damage
in order that the application may still be used.
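Point 1, protecting the application itself against subversion, can be illustrated with a minimal self-checking routine. Real anti-tamper products interleave thousands of obfuscated, mutually protecting checks; the single visible check below, with invented function names, only shows the principle of an application verifying its own code before trusting it:

```python
# Minimal self-integrity check: hash the protected routine's code bytes
# at "protection time", re-hash before each use, and trigger a custom
# response on mismatch. Function names are illustrative.
import hashlib

def protected_transfer(amount, account):
    return ("transfer", amount, account)

# Reference checksum taken over the routine's compiled code bytes.
EXPECTED = hashlib.sha256(protected_transfer.__code__.co_code).hexdigest()

def guarded_transfer(amount, account):
    actual = hashlib.sha256(protected_transfer.__code__.co_code).hexdigest()
    if actual != EXPECTED:
        # Custom response hook: refuse to run; a real app might also
        # alert a server or blacklist the device.
        return ("tampered",)
    return protected_transfer(amount, account)

ok_result = guarded_transfer(100, "acct-1")

# Simulate an attacker swapping in a subverted routine.
def _evil(amount, account):
    return ("transfer", amount, "attacker-acct")

protected_transfer = _evil
tampered_result = guarded_transfer(100, "acct-1")
```

A production scheme would also protect the checker itself and the stored checksum, since a single unobfuscated check is trivial to patch out.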
As malware continues to attack smartphones, financial
institutions must strive to provide the needed security to
their applications. Malware won’t go away and companies
need to be more proactive in securing apps from the inside
out using anti-tamper technologies to produce that added
level of security. We all know firewalls alone aren’t enough.
Andrew McLennan is an experienced entrepreneur who
has founded five start-up companies since 1993, includ-
ing Metaforic. Andrew has held all the key management
roles in startups including CEO, CMO, CCO and COO.
Andrew has an honors degree from Strathclyde Univer-
sity in mechanical engineering with aerodynamics.
Consolidating Packet Forwarding
Services with Data-Plane
Development Software
Consolidating all three planes to a single ATCA blade is now possible.
By Jack Lin, Yunxia Guo, and Xiang Li, ADLINK
In recent years, there has been a market and technology
trend towards the convergence of network infrastructure to
a common platform or modular components that support
multiple network elements and functions, such as applica-
tion processing, control processing, packet processing and
signal processing. In addition to cost savings and reduced
time-to-market, this approach provides the flexibility of
modularity and the ability to independently upgrade system
components where and when needed, using a common plat-
form or modular components in shelf systems and networks
of varying sizes. In traditional networks, switching modules
would be used to route traffic between in-band system mod-
ules and out-of-band systems; processor modules used for
applications and control-plane functions; packet processing
modules used for data-plane functions; and DSP modules
used for specialized signal-plane functions: four different
module types in all.
Enhancements to processor
architecture and the avail-
ability of new software
development tools are
enabling developers to use a
single blade architecture for consolidation of all their appli-
cation, control and packet-processing workloads. Huge
performance boosts achieved by this hardware/software
combination are making the processor blade architecture
increasingly viable as a packet-processing solution. To
illustrate this evolution, we developed a series of tests to
verify that an AdvancedTCA processor blade combined
with a data-plane development kit (DPDK) supplied by the
CPU manufacturer can provide the required performance
and consolidate IP forwarding services using a single plat-
form. In summary, we compared the Layer3 forwarding
performance of an ATCA blade using native Linux IP
forwarding without any additional optimization from
software with that obtained using the DPDK. We then
analyzed the reasons behind the gains in IP forwarding
performance achieved using the DPDK. [Editor’s note:
DPDK is an Intel product.]
AdvancedTCA Processor Blade
The ATCA blade used in this study is a highly integrated
processor blade with dual x86 processors, each with 8 cores
(16 threads) and supporting eight channels of DDR3-1600
VLP RDIMM for a maximum system memory capacity of
64GB per processor. Network I/O features include two
10Gigabit Ethernet ports (XAUI, 10GBase-KX4) compliant
with PICMG 3.1 option 1/9, and up to six Gigabit Ethernet
10/100/1000BASE-T ports to the front panel. The detailed
architecture of the ATCA blade is illustrated in the func-
tional block diagram in Figure 1.
Data-Plane Development Kit
The data plane development kit provides a lightweight
run-time environment for
x86 architecture processors,
offering low overhead and
run-to-completion mode to
maximize packet-processing
performance. The environ-
ment provides a rich selection
of optimized and efficient
libraries, also known as the
environment abstraction layer
(EAL), which are responsible
for initializing and allocating low-level resources, hiding the
environment specifics from the applications and libraries,
and gaining access to the low-level resources such as memory
space, PCI devices, timers and consoles.
The EAL provides an optimized poll mode driver (PMD);
memory and buffer management; and timer, debug and
packet-handling APIs, some of which may also be provided
by the Linux OS. To facilitate interaction with application
layers, the EAL, together with the standard GNU C Library
(GLIBC), provides full APIs for integration with higher-level
applications. The software hierarchy is shown in Figure 2.
Test Topology
In order to measure the speed at which the ATCA processor
blade can process and forward IP packets at the Layer3 level,
we used the following test environment shown in Figure 3.
Running the DPDK provides almost 6x the IP forwarding performance compared to native Linux.
Two ATCA switch blades with networking software pro-
vided non-blocking interconnection switches for the
10GbE Fabric and 1GbE Base Interface channels of all
three processor blades in the ATCA shelf, which supports
a full-mesh topology. Therefore, each switch blade can
provide at least one Fabric and Base interface connection
to each processor blade. A test system, compliant with
RFC2544 for throughput benchmarking, was used as a
packet simulator to send IP packets with different frame
sizes and collect the final statistical data, such as frames
per second and throughput.
As shown in the topology of the test environment in Figure 3,
the ATCA processor blade (device under test: DUT) has four
Gigabit Ethernet interfaces: two directly from the front panel
(Flow1 and Flow2), and another two from the Base Interfaces
(Flow3 and Flow4) via the DUT’s Base switches. In addition
to these four 1GbE interfaces, the DUT has two 10GbE inter-
faces connected to the test system via the switch blade.
Xeon E52648L
IPMB 0/1
QPI 8.0
SAS x3, USB x3, COM, PCIE x8,
SerDes x2, SATA x2,
PCIE x4
QPI 8.0
Xeon E52648L
x4 DMI 2.0
Intel C604 PCH
SAS x3, USB x3
PCIE x4
PCIE x8
Fabric Riser Card
PCIE x4
2.5" SATA
PCIE x4
MidSize AMC
AMC.1 T4
AMC.2 E2
AMC.3 S2
PCIE x8
Silicon Motion
PCIE x1
USB x3
Port 1
Port 2
Port 3
Port 4
Riser Card
AMC.2 E2
Cave Creek
PCIE x16
PCIE x4
Libc (GLIBC)
Linux Kernel
memory alloc
Contiguous or
DMA memory alloc
PCI configurations,
Scan, and I/O
Specific UIO driver
Specific UIO driver
Figure 1: ADLINK aTCA-6200 functional block diagram used for the performance study.
Figure 2: EAL and GLIBC in Linux application environment
In our test environment, the DUT was responsible for
receiving IPv4 packets from the test system, processing these
packets at the Layer3 level (e.g., packet de-encapsulation,
IPv4 header checksum validation, route table look-up and
packet encapsulation), then finally sending the packets back
to the test system according to the routing table look-up
result. All six flows are bi-directional: for example, the test
system sends frames from Interface 1/2/3/4/5/6 to the DUT
and receives frames via Interface 2/1/4/3/6/5, respectively.
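One concrete piece of the Layer3 processing described above is IPv4 header checksum validation. The following is a plain-Python illustration of the standard RFC 1071 ones'-complement arithmetic, not DPDK code; the sample header bytes are a common worked example, not data from this test.

```python
# RFC 1071-style IPv4 header checksum: ones'-complement sum of 16-bit words.
import struct

def ipv4_checksum(header: bytes) -> int:
    """Compute the IPv4 header checksum over the given header bytes."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header)//2}H", header))
    while total > 0xFFFF:                  # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

# A minimal 20-byte IPv4 header with its checksum field (bytes 10-11) zeroed:
hdr = bytes.fromhex("4500003c1c4640004006" + "0000" + "ac100a63ac100a0c")
csum = ipv4_checksum(hdr)
# Validation rule: checksumming the header WITH its checksum in place yields 0.
hdr_with_csum = hdr[:10] + struct.pack("!H", csum) + hdr[12:]
print(hex(csum), ipv4_checksum(hdr_with_csum) == 0)  # 0xb1e6 True
```

A forwarder performs this validation on receive and recomputes the checksum after decrementing the TTL, before re-encapsulating and transmitting the packet.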
Test Methodology
To evaluate how the DPDK consolidates packet-forwarding
services on the processor blade, an IP forwarding application
based on the DPDK was used in the following two test cases:
Performance with native Linux
In this test, Ubuntu Server 11.10 64-bit was installed on
the ATCA processor blade.
Performance with DPDK
The DPDK can be run in different modes, such as Bare
Metal, Linux with Bare Metal Run-Time and Linux User
Space. The Linux User Space mode is the easiest to use in
the initial development stages. Details of how the DPDK
functions in Linux User Space Mode are shown in Figure 4.
After compiling the DPDK target environment, an IP
forwarding application can be run as a Linux User Space
application.
Figure 3: IP Forwarding Test Environment used for benchmarking.
Figure 4: Intel DPDK running in Linux User Space Mode
After testing the ATCA processor blade under native Linux
and with the data-plane development kit provided by the
CPU manufacturer, we compared the IP forwarding per-
formance in these two configurations from the four 1GbE
interfaces (2 from the front panel and 2 from the Base
Interfaces) and two 10GbE Fabric Interfaces. In addition, we
benchmarked the combined IPv4 forwarding performance
of the processor blade using all six interfaces simultane-
ously (four 1GbE interfaces and two 10GbE interfaces).
Performance comparison using four 1GbE interfaces
When running IPv4 forwarding on the four 1GbE
interfaces of the processor blade with native Linux IP for-
warding enabled, a rate of 1 million frames per second can
be sustained with a frame size of 64 bytes. As the frame
size is increased to 1024 bytes, native Linux IP forwarding
can approach 100% of the line rate. But in the real world,
frame sizes are usually smaller than 1024 bytes, so 100%
line rate forwarding is not achievable. However, with the
DPDK running on only two CPU threads under the same
Linux OS, the processor blade can forward frames at 100%
line speed without any frames lost regardless of the frame
size setting, as shown in Figure 5.
The ATCA processor blade running the DPDK provides
almost 6 times the IP forwarding performance compared
to native Linux IP forwarding.
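The "almost 6 times" figure is consistent with simple line-rate arithmetic, assuming the standard 20 bytes of per-frame wire overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap):

```python
# Theoretical 64-byte line rate of one GbE port, with 20 bytes of
# per-frame wire overhead beyond the Ethernet frame itself.
fps_per_port = 1e9 / ((64 + 20) * 8)      # ~1.488 million frames per second
total_4ports = 4 * fps_per_port           # ~5.95 Mfps at 100% line rate
print(round(total_4ports / 1e6, 2))       # vs. ~1 Mfps under native Linux
```

Four 1GbE ports at 100% line rate with 64-byte frames total about 5.95 million frames per second, roughly six times the ~1 Mfps the article reports for native Linux forwarding.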
Performance comparison using two 10GbE interfaces
Running the IP forwarding test on the two 10GbE Fabric
Interfaces shows an even greater performance gap
between native Linux and DPDK-based IP forwarding than
that using four 1GbE interfaces. As shown in Figure 6, the
processor blade with DPDK running on only two threads
provides a gain of more than 10 times IP forwarding per-
formance compared to native Linux using all available
CPU threads.
Total IPv4 forwarding performance of the processor blade
Testing the combined IP forwarding performance of the
processor blade using all available interfaces (two 10GbE
Fabric Interfaces, two 1GbE front panel interfaces and two
1GbE Base Interfaces), the processor blade with the DPDK
can forward up to 27 million frames per second when the
frame size is set to 64 bytes. In other words, up to 18Gbps
of the theoretical 24Gbps throughput can be forwarded
(i.e., 75.3% of the line rate). Furthermore, the throughput
in terms of line rate increases to 92.3% and even up to
99% when the frame size is set to 128 bytes and 256 bytes, respectively.
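These aggregate figures can be cross-checked with the same line-rate arithmetic, again assuming the standard 20 bytes of per-frame wire overhead:

```python
# Sanity check of the aggregate forwarding figures quoted above.
WIRE_OVERHEAD = 20  # bytes: preamble + SFD + inter-frame gap

def line_rate_fps(link_bps, frame_bytes):
    """Theoretical maximum frames per second for one link."""
    return link_bps / ((frame_bytes + WIRE_OVERHEAD) * 8)

# Six interfaces: 4x 1GbE + 2x 10GbE, 64-byte frames
theoretical = 4 * line_rate_fps(1e9, 64) + 2 * line_rate_fps(10e9, 64)
measured = 27e6  # ~27 million fps forwarded with the DPDK

print(f"theoretical: {theoretical/1e6:.1f} Mfps")        # ~35.7 Mfps
print(f"line-rate share: {measured/theoretical:.1%}")    # ~75.6%
print(f"wire Gbps: {measured*(64+WIRE_OVERHEAD)*8/1e9:.1f}")  # ~18.1
```

The measured 27 Mfps works out to roughly 18 Gbps on the wire against the 24 Gbps aggregate, about 75% of line rate, matching the 75.3% quoted above to within rounding.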
The reasons why the DPDK delivers more powerful IP
forwarding performance than native Linux stem mainly
from the DPDK design features described below.
Polling mode instead of interrupts
Generally, when packets come in, native Linux receives
interrupts from the network interface controller (NIC),
schedules the softIRQ, proceeds with context switching,
and invokes system calls such as read() and write().
Figure 5: IP Forwarding performance comparison using 4x 1GbE interfaces
In contrast, the DPDK uses an optimized poll mode driver
(PMD) instead of the default Ethernet driver to poll for
incoming packets continuously, avoiding software interrupts,
context switching and system-call invocations.
This saves significant CPU resources and reduces latency.
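The poll-mode idea can be sketched as a toy in plain Python. This is a conceptual illustration, not the DPDK API; the real counterpart is the PMD's burst receive call (rte_eth_rx_burst() in the DPDK), which likewise drains a NIC ring without blocking or entering the kernel.

```python
# Toy contrast with interrupt-driven receive: a busy-polling burst drain.
from collections import deque

nic_queue = deque()  # stands in for a NIC RX ring (hypothetical)

def rx_burst(max_frames=32):
    """Poll-mode receive: drain up to max_frames without blocking or syscalls."""
    burst = []
    while nic_queue and len(burst) < max_frames:
        burst.append(nic_queue.popleft())
    return burst

# Simulate 100 arriving frames, then drain them in 32-frame polling bursts:
nic_queue.extend(range(100))
received = []
while nic_queue:
    received.extend(rx_burst())
print(len(received))  # 100 frames moved with no interrupts or context switches
```

The trade-off is that a polling core spins at 100% utilization even when idle, which is why the DPDK dedicates specific cores to the data plane.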
Huge page instead of traditional pages
Compared to the 4 kB pages of native Linux, using larger
pages means time savings for page look-ups and a reduced
possibility of a translation lookaside buffer (TLB) miss.
The DPDK runs as a user-space application, allocating
huge pages in its own memory zone to store frame buffers,
rings and other related buffers, which are outside the control
of other applications and even the Linux kernel. In the test
described in this white paper, a total of 1024 huge pages of
2MB each were reserved for running the IP forwarding application.
Figure 6: IP Forwarding performance comparison using 2x 10GbE interfaces
Figure 7: IP Forwarding performance comparison using 2x 10GbE + 4x 1GbE interfaces
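The arithmetic behind that choice, taking the 1024 x 2MB reservation above against Linux's default 4 kB pages:

```python
# Page-count comparison for the reserved memory zone.
PAGE_4K = 4 * 1024
PAGE_2M = 2 * 1024 * 1024
zone = 1024 * PAGE_2M                    # the reserved 2 GB memory zone
pages_2m = zone // PAGE_2M               # mappings needed with huge pages
pages_4k = zone // PAGE_4K               # mappings needed with 4 kB pages
print(pages_2m, pages_4k, pages_4k // pages_2m)  # 1024 524288 512
```

With 2MB pages the whole zone is covered by 1024 TLB entries; with 4 kB pages it would take 512 times as many, far exceeding typical TLB capacity and making misses correspondingly more likely.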
Zero-copy buffers
In traditional packet processing, native Linux decapsulates
the packet header, and then copies the data to the user
space buffer according to the socket ID. Once the user space
application finishes processing the data, a write system
call is invoked to send out data to the kernel, which takes
charge of copying data from the user space buffer to the
kernel buffer, encapsulates the packet header and finally
sends it out via the relevant physical port. The native
Linux process thus sacrifices time and resources on buffer
copies between kernel and user space buffers.
In comparison, the DPDK receives packets at its reserved
memory zone, which is located in the user-space buffer, and
then classifies the packets to each flow according to configured
rules without copying to the kernel buffer. After processing
the decapsulated packets, it encapsulates the packets with
the correct headers in the same user-space buffer, and finally
sends them out to the relevant physical ports.
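The in-place style can be illustrated in plain Python with a memoryview: the header is rewritten inside the same buffer rather than the frame being copied between buffers. This is a conceptual sketch, not DPDK code, and swapping the MAC fields is a simplified stand-in for the header rewrite a real forwarder performs.

```python
# Zero-copy sketch: rewrite a frame's Ethernet header in place.
def forward_in_place(frame: bytearray) -> None:
    """Swap the destination and source MAC fields inside the same buffer,
    leaving the payload untouched; no second buffer is allocated."""
    view = memoryview(frame)                 # a window onto frame: no copy
    dst, src = bytes(view[0:6]), bytes(view[6:12])
    view[0:6] = src                          # in-place header rewrite
    view[6:12] = dst

frame = bytearray(b"\xaa" * 6 + b"\xbb" * 6 + b"\x08\x00" + b"payload")
before_id = id(frame)
forward_in_place(frame)
print(frame[:6] == b"\xbb" * 6, id(frame) == before_id)  # True True
```

The identity check confirms the frame was never copied: the same buffer that received the packet is the one transmitted, which is the property the DPDK's user-space memory zone provides.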
Run-to-completion and core affinity
Prior to running applications, the DPDK initializes and
allocates all low-level resources, such as memory space, PCI
devices, timers and consoles, which are reserved for DPDK-
based applications only. After initialization, each core is
launched to take over an execution unit, running the same
or different workloads depending on the actual application
requirements.
Moreover, the DPDK provides a way to pin each execution
unit to a specific core, preserving core affinity and thus
avoiding cache misses. In the tests described, the physical
ports of the processor blade were bound to two different
CPU threads according to this affinity.
Lockless implementation and cache alignment
The libraries and APIs provided by the DPDK are optimized
to be lockless to prevent deadlocks in multi-threaded appli-
cations. The DPDK also optimizes its buffer, ring and other
data structures to be cache-aligned, maximizing cache-line
efficiency and minimizing cache-line contention.
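The index discipline behind a lockless single-producer/single-consumer ring can be sketched as follows. This is a conceptual illustration of the idea, in the spirit of the DPDK's ring structures, not the rte_ring implementation, and Python itself does not provide real lock-free concurrency guarantees.

```python
# SPSC ring sketch: producer writes only `head`, consumer writes only `tail`,
# so neither side ever needs a lock to coordinate with the other.
class SpscRing:
    def __init__(self, size):            # size must be a power of two
        self.buf = [None] * size
        self.mask = size - 1
        self.head = 0                    # advanced only by the producer
        self.tail = 0                    # advanced only by the consumer

    def enqueue(self, item):
        if self.head - self.tail > self.mask:
            return False                 # ring full
        self.buf[self.head & self.mask] = item
        self.head += 1                   # publish after the slot is written
        return True

    def dequeue(self):
        if self.tail == self.head:
            return None                  # ring empty
        item = self.buf[self.tail & self.mask]
        self.tail += 1
        return item

ring = SpscRing(8)
for i in range(8):
    ring.enqueue(i)
assert not ring.enqueue(99)              # full at capacity 8
print([ring.dequeue() for _ in range(8)])  # drains in FIFO order: 0..7
```

In C, the head and tail counters would additionally be placed on separate cache lines, so the producer's and consumer's writes never contend for the same line.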
By analyzing the results of our tests using the ATCA pro-
cessor blade’s four 1GbE interfaces and two 10GbE Fabric
Interfaces with and without the data plane development
kit provided by the CPU manufacturer (Figures 5 and 6),
we can conclude that running Linux with the DPDK and
using only two CPU threads for IP forwarding can achieve
approximately 10 times the IP forwarding performance of
native Linux with all CPU threads running on the same
hardware platform.
As is evident in Figure 7, the IPv4 forwarding performance
achieved by the processor blade with the DPDK makes it cost-
and performance-effective for customers to migrate their
packet processing applications from network processor-
based hardware to x86-based platforms, and use a uniform
platform to deploy different services, such as application pro-
cessing, control processing and packet processing services.
Jack Lin is the team manager of Platform Inte-
gration and Validation, Embedded Computing
Product Segment, which focuses on validat-
ing ADLINK building blocks and integrating
application-ready platforms for end customers.
He holds a B.S. and M.S. in information and
communication engineering from Beijing JiaoTong University.
Prior to joining ADLINK, he worked for Intel and Kasenna.
Yunxia Guo is a PIV software system engineer
in ADLINK’s Embedded Computing Product
Segment and holds a B.S. in communication
engineering from Hubei University of Technol-
ogy and an M.S. in information and communi-
cation engineering from Wuhan University of
Xiang Li is a member of the platform integra-
tion and validation team in ADLINK’s Embed-
ded Computing Product Segment. He holds a
B.S. in electronic and information engineering
from Shanghai Tongji University.
EMAC, inc.
2390 EMAC Way
Carbondale, IL 62901
618-529-4525 Telephone
618-457-0110 Fax

- ARM9 400Mhz fanless processor, up to 256 MB SDRAM, up to 1 GB flash, up to 8 MB of serial data flash
- 480x272 WQVGA TFT LCD with 4-wire resistive touch screen
- 3 RS232 serial ports with handshaking and 1 RS232/422/485; 1 USB 2.0 (High Speed) Host port; 1 USB 2.0 (High Speed) OTG port; 1 SPI and 1 I2C port
- 2 MicroSD flash card sockets; 1 audio beeper; timer/counters and pulse width modulation (PWM) ports; 4-channel 10-bit analog-to-digital converter
- Dimensions: 4.8” L x 3.0” W x 1.2” H
- Power supply voltage: +5V DC to 35V DC, with PoE
User Interface for Process Control & Industrial Automation
PPC E4+ Compact Panel PC
Compatible Architectures: ARM
The PPC-E4+ is an ultra compact Panel PC with a 4.3”
WQVGA (480 x 272) TFT color LCD and 4 wire resistive
touch screen. The dimensions of the PPC-E4+ are 4.8” by
3.0”, about the same as those of popular touch-screen
cell phones. The PPC-E4+ comes with either Windows CE
6.0 or EMAC’s Embedded Linux distribution installed and
fully configured on the onboard flash. Just apply power
and watch the User Interface appear on the vivid color
LCD. The PPC-E4+ compact Panel PC utilizes a System
on Module (SoM) for the processing core. This allows the
user to easily upgrade, if more memory capacity, storage
capacity or processing power is required. The PPC-E4+
includes an embedded ARM 9 SoM; this ARM Single
Board Computer features a 400Mhz Fanless Low Power
Processor with video and touch. The SoM provided with
the PPC-E4+ supports up to 256MB of SDRAM, up to 1GB
of Flash, and up to 8MB of serial data flash. Typical power
consumption is less than 5 Watts and the LED backlight
can be shut off when not in use to further decrease its
power consumption. The PPC-E4+ offers three RS-232
serial ports and one RS232/422/485 port. Also provided
are two USB 2.0 ports, an audio beeper and a battery-
backed real-time clock. Two MicroSD flash card sockets
provide additional flash storage.
connected to a network using the 10/100 Base-T Ethernet
controller and its onboard RJ-45 connector.
The PPC-E4+ starts at $375 USD per unit.

- Low Power Consumption – ARM9 400Mhz Fanless
- 4.3” Color LCD with Resistive Touch Screen
- Compact Open-Frame Design
- Ready to Run Operating System Installed on Flash
- Power Over Ethernet and Audio with Line-in/out
TeamF1, Inc.
39159 Paseo Padre Parkway
Suite 121
Fremont, CA 94538
+1 (510) 505-9931 ext. 5 Telephone
+1 (510) 505-9941 Fax

- Advanced networking capabilities through IPv6, IPv6-to-IPv4 tunnel, UPnP, DLNA, etc., with ironclad home area network security features including a packet filtering firewall, content filtering and wireless intrusion prevention
- Wireless networking with the latest 802.11 and wireless security standards (WEP/WPA/WPA2)
- Pre-integrated rich media applications (Media Manager, Streaming Manager, Download Manager and App Manager) manageable through web-based device management and setup/configuration wizards
- Extensively validated on a variety of embedded OSs (including VxWorks and Linux) and CPU platforms that include ARM/Xscale, MIPS, PowerPC, and x86 processors
Consumer Premises Equipment; Home Gateway Devices;
Residential WLAN AP appliances; Home/SOHO NAS;
Print / File Server; Media sharing / streaming / rendering
devices; Audio/Video bridge; Broadband access
SecureF1rst CPE Gateway Solution
Compatible Architectures: ARM, MIPS, Power, x86
TeamF1’s SecureF1rst CPE Gateway Solution (CGS) is
a comprehensive turnkey software package enabling
the next generation of rich, auto-provisioned residential
gateways and CPE routers deployed by broadband
Service Providers (SPs). A member of TeamF1’s Secure-
F1rst line of prepackaged solutions, SecureF1rst CGS
enables OEMs/ODMs/SPs to deliver advanced home
area networking devices for a seamless and secure
“connected-home” experience to end-customers.
Devices built around SecureF1rst CPE Gateway Solution
offer end-customers zero-touch intelligent networking
for heterogeneous home area network devices with an
easy-to-use application and device management inter-
face. SecureF1rst CPE Gateway Solution based devices
open up the possibility of alternate revenue streams for
SPs through an application-oriented architecture allowing
installation of and subscription to OSGi-based applications
from SPs or third parties along with automatic remote
configuration and provisioning capabilities. SecureF1rst
CGS offers cloud-friendliness and the flexibility of net-
work attached storage enabled features such as media
sharing/streaming/rendering and download manage-
ment through an easily manageable media centric and
secure residential gateway device. Unique, customized
or “branded” residential gateway device graphical user
interfaces (GUIs) are available for OEMs/SPs.

- Feature-rich, easy-to-use SP CPE gateway solution reduces development costs, risk, and time-to-market
- Enables product differentiation through advanced security and end-user features such as parental control, secure access to connected storage and easy-to-use media sharing
- Enhances user experience through zero-touch connectivity of heterogeneous home area network devices
- Opens alternate revenue streams for SPs through the flexibility to install and use SP or third-party applications (custom or OSGi) with automatic remote management capabilities through the TR-069 family of protocols
- Branding options offer a cost-effective, customized look and feel
- Standard, field-tested software solution in a production-ready custom package, with all hardware integration, porting, testing, and validation completed by TeamF1
Networking / Communication Packages
Networking / Communication Packages
TeamF1, Inc.
39159 Paseo Padre Parkway
Suite 121
Fremont, CA 94538
+1 (510) 505-9931 ext. 5 Telephone
+1 (510) 505-9941 Fax
Networking / Communication Packages
Networking / Communication Packages

- Seamless, standards-based media sharing through UPnP A/V and DLNA; flexible network storage add-on applications using the built-in OSGi framework
- Easy-to-use, intuitive GUI for standard users, with a CLI available for advanced users; full media controller (DMC) functionality via GUI for controlling the streaming of media from various networked media servers to renderers
- Secure network storage with group-based policies and access control; flexibility to integrate authentication modules for secure access
- Support for both built-in and external drives with varied interfaces including IDE, SATA, and USB; all popular file systems (ext2, ext3, FAT16, FAT32, NTFS, etc.) and file transfer protocols (CIFS, NFS, AFS, FTP, SFTP and HTTP) are supported
Wireless NAS Solutions; NAS as an add-on for Con-
sumer Premises Equipment; Home Gateway Devices;
Residential WLAN AP appliances; Home/SOHO NAS;
Media sharing / streaming / rendering routers
SecureF1rst Network
Attached Storage Solution
Compatible Architectures: ARM, MIPS, Power, x86
Compatible Linux and Android OS: Embedded Linux
TeamF1’s SecureF1rst Network Attached Storage (NASS)
is a stand-alone prepackaged turnkey NAS software
solution or add-on module offering network storage and
sharing services in a secure local-area network environ-
ment. With user-based access control, intuitive graphical
user interface and media streaming, SecureF1rst NASS
provides an innovative network storage solution with
built-in applications for end-users to easily store, share
and manage information across network devices. Secure-
F1rst NASS benefits OEMs, ODMs and service providers’
end-customers with a state-of-the-art network storage
solution for a secure data sharing experience. Cloud-
friendly, its flexibility allows installation and subscription
to third-party applications for home and business usage
through a friendly graphical user interface for novice
users and a command line interface for advanced users.
When coupled as an add-on to other TeamF1 Secure-
F1rst solutions such as CPE Gateway Solution, Managed
Access Point Solution and Security Gateway Solution,
NASS offers secure network storage and access to the
network users with various network attached storage
applications including automatic downloading of tor-
rents, digital media server and controller capabilities and
disk and partition management features.

- Proven TeamF1 SecureF1rst software components and common framework reduce OEMs’ risk
- Rich, media-centric pre-integrated applications support with flexibility to install and subscribe to third-party applications
- Intelligent networking with zero-touch connectivity of various home and business network devices to Network Attached Storage
- Support for various disk interfaces, file-system formats and file types with true plug-and-play nature
- Branding options offer a cost-effective, customized look and feel
- Production-ready solution, with all hardware integration, porting, testing, and validation on a variety of embedded OSs (including VxWorks and Linux) and CPU platforms (ARM/Xscale, MIPS, PowerPC, x86, etc.) completed by TeamF1
TeamF1, Inc.
39159 Paseo Padre Parkway
Suite 121
Fremont, CA 94538
+1 (510) 505-9931 ext. 5 Telephone
+1 (510) 505-9941 Fax
Networking / Communication Packages
Networking / Communication Packages

- Wireless AP Gateway with advanced SSL + IPsec VPN / Firewall / WIPS / Gateway AV / Web Filtering capabilities for an all-in-one wired + wireless LAN solution
- Friendly browser-based remote web management provided by interfaces that utilize an easy-to-understand, step-by-step wizard to simplify configuration of even the most advanced VPN tunnel schemes
- TR-069, SNMP and a powerful SSH-secured command line interface to enable configuring, monitoring and provisioning of a gateway device
- Extensively validated on a variety of embedded OSs (including VxWorks and Linux) and CPU platforms that include ARM/Xscale, MIPS, PowerPC, and x86 processors
Broadband access; Carrier Class Networking; Enterprise
Data Networking; General Aerospace and Defense;
Industrial Automation; Instrumentation; Medical; Net-
working Technologies; Safety Critical Avionics; Server
and Storage Networking
SecureF1rst Security
Gateway Solution
Compatible Architectures: ARM, MIPS, Power, x86
Compatible Linux and Android OS: Embedded Linux
TeamF1’s SecureF1rst Security Gateway Solution is a com-
prehensive turnkey software package combining a rich
set of field-proven, standard components with an array of
customizable options to provide OEMs/ODMs the ultimate
in product flexibility. It enables OEMs to build fully inte-
grated UTM devices allowing users to carve security zones
and manage security policies in a centralized manner. A
member of TeamF1’s SecureF1rst line of innovative pre-
packaged solutions, SecureF1rst SGS allows OEMs/ODMs
to deliver leading-edge VPN/firewall/IPS/Gateway AV
devices to the small-to-medium businesses (SMB) market
in record time at far less risk than traditional development
approaches. Devices built around SecureF1rst SGS offer
end-customers ironclad, advanced networking security;
easy-to-use device management features; and multiple
gateway options and can also be customized, or “branded”
with unique graphical user interfaces (GUIs). With Secure-
F1rst SGS, OEMs can build gateways between multiple
LAN, WAN, and DMZ interfaces – plus any other security
zones – of several different types. WAN interfaces can
include DSL or cable modem, Ethernet, cellular data (3G/LTE/
WiMAX) links, or even a Wi-Fi® client link. LAN interfaces
can include a simple Ethernet port connected to an external
switch, a built-in Ethernet switch (an unmanaged or “smart”
managed switch), or an 802.11 a/b/g/n Wi-Fi access point.

- Less risk for OEMs through proven TeamF1 SecureF1rst software components and a common framework with a comprehensive set of features enabling full customization of devices
- Extensive support for advanced 802.11 standards for security, QoS, mobility, and roaming
- Advanced protocols such as IPsec, VPN, SSL (including OpenVPN-compatible SSL), etc. provide ironclad networking security features
- Branding options offer a cost-effective, customized look and feel
- Advanced device management through SNMPv3, CLI, TR-069 and an easy-to-use web interface, with the ability to dynamically extend router functionality through TeamF1 and third-party extensions/plug-ins
- Standard, field-tested software solution in a production-ready custom package, with all hardware integration, porting, testing, and validation completed by TeamF1
Intel Expands Semiconductor
IP in Handset Bid
The processor giant seems serious about the handset market as shown by its
BIOS collaboration with Phoenix and acquisition of Interdigital 3G patents.
By John Blyler
Several recent stories suggest that Intel wants to become a
significant player in the rapidly expanding mobile handset
market.
The first story seems innocuous enough. Phoenix
Technologies, a long-time developer of PC Basic
Input Output System (BIOS) firmware, recently announced an
agreement with Intel to jointly develop the new reference Uni-
fied Extensible Firmware Interface (UEFI) for the Intel code
BIOS boot firmware is the first code to run when a PC is pow-
ered on. In a drive (no pun) to modernize the booting process,
the UEFI community has created a specification to define the
software interface between an operating system and platform
firmware (Figure 1). This specification was designed to be both
processor- and device driver-independent. UEFI-capable sys-
tems are already being shipped by major desktop OEMs.
Most PC motherboard suppliers license a BIOS core from
third-party vendors like American Megatrends (AMI), Insyde
Software, and Phoenix Technologies. The board suppliers then
customize the BIOS core to address different hardware needs
for their various product lines.
The processor- and driver-independent nature of the UEFI
specification complements Phoenix’s latest Secure Core Tech-
nology (SCT) tool that enables a universal build environment.
This means that a single BIOS can be used across numerous
operating systems and silicon platforms, thus improving code
efficiency and reliability. SCT is targeted at servers, notebooks,
desktop and embedded devices.
According to Steve Chan, Phoenix’s CTO, the company has
engagements with Intel’s desktop PC client group to col-
laborate on a reference BIOS. Additionally, the company is
providing engineering support to Intel’s server group. When
asked if the reference BIOS would be used on Intel’s embedded
products including mobile, Chan could only say that the BIOS
could be applied across all platforms.
The question concerning cross-platform BIOS development is
important as it suggests further evidence that Intel is serious
about the handset and tablet markets. Earlier this year, the
company announced that the European carrier Orange would
support its single-core Atom Z2460 processor-powered handset
running Google’s Android operating system (Figure 2).
Additional evidence of Intel’s move into the handset space
comes from its recent acquisition of the InterDigital wireless
patent portfolio. InterDigital’s patents cover 3G and newer
wireless technology used by both computer and mobile
devices.
In collaborating with or
acquiring patent-rich com-
panies, Intel is employing a
strategy common to handset
and tablet manufacturers. By
amassing large IP and patent portfolios, companies hope
to strengthen their negotiating power among rivals.
Currently, Apple, Motorola,
Google, HTC and Samsung –
to name a few – are involved
in lawsuits that began as patent disputes (see “IP Patent
Wars: Technical Frivolity vs. Substance?” http://
John Blyler is the editorial director of Extension
Media, which publishes Chip Design and
Embedded Intel® Solutions magazine, plus over
36 EECatalog Resource Catalogs in vertical
market areas.
Figure 1: Unified Extensible Firmware
Interface (UEFI) seeks to standardize BIOS
Figure 2: Intel’s Atom-based
(SKU: Z2460) handset will run on
the Orange carrier in Europe.