Nissan Announces Plans to Release Driverless Cars by 2020

By
PAUL STENQUIST

Published: August 29, 2013, 6:00 am

Nissan announced plans this week to roll out market-ready autonomous-drive vehicles by 2020. (Nissan North America)

If Nissan has its way, reading your e-mail while driving to work may soon be acceptable behind-the-wheel behavior. The automaker says it will market autonomous-drive vehicles, cars that can operate without the assistance of a driver, by 2020. Carlos Ghosn, chief executive of Nissan, said in a news release, “I am committing to be ready to introduce a new groundbreaking technology, autonomous drive, by 2020, and we are on track to realize it.”

A host of advanced equipment is needed for autonomous operation, including cameras that can see the area surrounding the vehicle; radar sensors that measure distance; laser scanners that detect the shape of objects; a global-positioning sensor that locates the vehicle; advanced computer systems that apply artificial intelligence to that data and make driving decisions; and a variety of actuators that can execute driving maneuvers while compensating for less than ideal conditions.
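
In software terms, that equipment list maps onto a sense-decide-actuate loop. Below is a minimal, hypothetical sketch in Python; the class, the thresholds and the numbers are illustrative assumptions, not Nissan's actual system:

    # A hypothetical sketch of the sense-decide-actuate loop implied above;
    # every name and threshold here is illustrative, not Nissan's design.
    from dataclasses import dataclass

    @dataclass
    class WorldModel:
        obstacle_distance_m: float  # fused from radar and laser-scanner data
        lane_offset_m: float        # from camera lane detection
        position: tuple             # from the global-positioning sensor

    def decide(world: WorldModel) -> dict:
        """The artificial-intelligence step: turn sensed data into commands."""
        brake = 1.0 if world.obstacle_distance_m < 10.0 else 0.0
        steer = -0.1 * world.lane_offset_m  # steer back toward lane center
        return {"brake": brake, "steer": steer}

    # One tick of the loop: sensors -> world model -> decision -> actuators.
    world = WorldModel(obstacle_distance_m=8.5, lane_offset_m=0.4,
                       position=(33.68, -117.83))
    print(decide(world))  # {'brake': 1.0, 'steer': -0.04}

In a real vehicle the decision step runs many times per second, and the actuators close the loop by executing the commands while compensating for road conditions.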

In a speech at a media event in California, Andy Palmer, a Nissan executive vice president, said, “Our autonomous-driving vehicles utilize cameras, sensors, global positioning sensors and machine technologies, including the Safety Shield features already offered in many of our models, to maneuver with reduced human intervention, or without any human intervention.”

Mitsuhiko Yamashita, Nissan’s executive vice president for research and development, stands with a test vehicle in Irvine, Calif. (Nissan North America)

That suggests levels of autonomy that a driver could select. In an e-mail, Steve Yaeger, a Nissan spokesman, confirmed that autonomous operation was driver-selectable in prototype vehicles the automaker demonstrated at a recent media event.

Some of today’s vehicles, including some made by Nissan, qualify as semiautonomous. Intelligent cruise control can keep track of a vehicle’s place in traffic and adjust speed accordingly. Lane-departure warning systems can alert the driver if the vehicle crosses over lane markings. Lane-departure prevention systems go a step further and apply corrective measures. Intelligent braking systems can bring a vehicle to a halt if the driver fails to brake in time. But it’s a giant step from these aids to fully autonomous operation. Providing assistance when a driver fails to react is a technical challenge, but developing a foolproof artificial intelligence system that can make all driving decisions is far more complex.

Technical hurdles are just one of the problems that an autonomous vehicle pioneer faces. Bryan Reimer, a research scientist engaged in driver workload studies at the Massachusetts Institute of Technology, isn’t sure that humans can cope with these technologies. His research, and the work of others in the field, has determined that the sweet spot for driver awareness is somewhere between understimulated and overstimulated.

“We are capable of developing the sensors and systems for an autonomous vehicle, but do we know how people will interact?” he said in a telephone interview. “What happens when people start driving them? Autonomy complacency among pilots has become a problem in aviation. The broad issue is not whether we can develop the technologies, but whether we can develop cohesive interfaces that drivers can operate successfully without losing their skills.”

In May, the National Highway Traffic Safety Administration announced plans for research on safety issues related to autonomous vehicles. A policy statement expressed support for technologies that “have the potential to reduce significantly the many thousands of fatalities and injuries that occur each year as a result of motor vehicle crashes.” The agency has not published standards but has said, “Research will be performed to support the development of any potential technical requirements for automated vehicle systems.”

There is also the question of liability. Will the vehicle occupant who is not actually at the controls of an autonomous vehicle be liable if that vehicle is involved in an accident, or will the manufacturer that engineered the driving system have to accept responsibility?

Nissan’s introduction of the fully electric Leaf before the development of the infrastructure necessary to support E.V.’s was a bold move. An autonomous-drive vehicle takes the company’s boldness to a new level. But Nissan is making the leap and has begun development of a proving ground in Japan that would enable testing of autonomous vehicles in real-world conditions.

“Nissan Motor Company is ready,” Mr. Palmer said. “We are on a mission to be the most progressive car
company in the world, and to redefine how motorists interact with their vehicles.”

With the driverless Nissan scheduled to appear in seven years, the world may be ready when it arrives. But the technology is not ready yet. In its Preliminary Statement of Policy Concerning Automated Vehicles, N.H.T.S.A., which will have the final word on autonomous vehicles in the United States, wrote that it “does not recommend that states authorize the operation of self-driving vehicles for purposes other than testing at this time,” adding, “We believe there are a number of technological issues as well as human performance issues that must be addressed before self-driving vehicles can be made widely available.”





If Our Gadgets Could Measure Our Emotions

By
JENNA WORTHAM

Published: June 1, 2013

ON a recent family outing, my mother and sister got into a shouting match. But they weren’t mad at each other; they were yelling at the iPhone’s turn-by-turn navigation system. I interrupted to say that the phone didn’t understand, or care, that they were upset.



Yuko Shimizu


“Honey, we know,” my mom replied. “But it should!”

She had a point. After all, computers and technology are becoming only smarter, faster and more intuitive. Artificial intelligence is creeping into our lives at a steady pace. Devices and apps can anticipate what we need, sometimes even before we realize it ourselves. So why shouldn’t they understand our feelings? If emotional reactions were measured, they could be valuable data points for better design and development. Emotional artificial intelligence, also called affective computing, may be on its way.

But should it be? After all, we’re already struggling to cope with the always-on nature of the devices in our lives. Yes, those gadgets would be more efficient if they could respond when we are frustrated, bored or too busy to be interrupted, yet they would also be intrusive in ways we can’t even fathom today. It sounds like a science-fiction movie, and in some ways it is. Much of this technology is still in its early stages, but it’s inching closer to reality.

Companies like Affectiva, a start-up spun out of the M.I.T. Media Lab, are working on software that trains computers to recognize human emotions based on their facial expressions and physiological responses. A company called Beyond Verbal, which has just raised close to $3 million in venture financing, is working on a software tool that can analyze speech and, based on the tone of a person’s voice, determine whether it indicates qualities like arrogance or annoyance, or both.
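
Under the hood, systems of this kind are generally supervised classifiers: numeric features extracted from a face or a voice, plus labeled examples to train on. A minimal sketch using scikit-learn follows; the features and labels are invented for illustration and do not reflect Affectiva’s or Beyond Verbal’s actual methods:

    # Illustrative only: a supervised classifier over hand-picked features,
    # the general recipe behind emotion-recognition software.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: rows are feature vectors (e.g., mouth
    # curvature, brow tension, voice pitch variance); labels are emotions.
    X_train = np.array([[0.8, 0.2, 0.1],   # smiling, relaxed
                        [0.1, 0.9, 0.7],   # frowning, tense
                        [0.7, 0.3, 0.2],
                        [0.2, 0.8, 0.8]])
    y_train = ["happy", "frustrated", "happy", "frustrated"]

    clf = LogisticRegression().fit(X_train, y_train)
    print(clf.predict([[0.15, 0.85, 0.75]]))  # -> ['frustrated']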

Microsoft recently revealed the Xbox One, the next-generation version of its flagship game console, which includes an update of Kinect, its motion-tracking device that lets people control games by moving their hands and bodies. The new Kinect, which goes on sale later this year, can be controlled by voice but is not programmed with software to detect emotions in those interactions.

But it does include a higher-definition camera capable of tracking fine skeletal and muscular changes in the body and face. The machine can already detect the physics behind bodily movements, and calculate the force behind a punch or the height of a jump. In addition, one of the Kinect’s new sensors uses infrared technology to track a player’s heartbeats. That could eventually help the company detect when a player’s pulse is racing during a fitness contest, and from excitement after winning a game. For avid gamers like myself, the possibilities for more immersive, interactive play are mind-boggling.

Albert Penello, a senior director of product planning at Microsoft, says the company intends to use that data to give designers insight into how people feel when playing its games, a kind of feedback loop that can help shape future offerings and experiences. He says Microsoft takes privacy very seriously and will require game developers to receive explicit permission from Xbox One owners before using the data.

Microsoft says games could even adapt in real time to players’ physical response, amping up the action if they aren’t stimulated enough, or tamping it down if it’s too scary. “We are trying to open up game designers to the mind of the players,” Mr. Penello said. “Are you scared or are you laughing? Are you paying attention and when are you not?”

Eventually, he said, the technology embedded in the Kinect camera could be used for a broader range of applications, including tracking reactions while someone is looking at ads or shopping online, in the hope of understanding what is or isn’t capturing the person’s interest. But he said those applications were not a top priority for the company. (Some companies have experimented with technologies like eye-tracking software to see what parts of commercials draw the most attention from viewers.)

Online media companies like Netflix, Spotify and Amazon already have access to real-time consumer sentiment, knowing which chapters, parts of songs, movies and TV shows people love, hate, skip and like to rewatch. Such data was used to engineer the popular online Netflix series “House of Cards,” whose creators had access to data about people’s television viewing habits.

So it is not much of a leap to imagine Kinect-like sensors, and tools like the ones Affectiva and Beyond Verbal are developing, being used to create new entertainment, Web browsing and search experiences.

The possibilities go far beyond that. Prerna Gupta, chief product officer at Smule, a development studio that makes mobile games, spoke about the subject at South by Southwest, the conference in Austin, Tex., in March. She called her talk “Apps of the Future: Instagram for Cyborgs,” and gazed far into the future of potential applications.

She says she thinks industries like health care may be revolutionized by emotionally aware technology, particularly as we enter a time when laptops, smartphones, smart watches, fitness trackers and home media and game consoles interact with one another.


“Tracking how our bodies are responding throughout the day could allow you to tailor your life according to what’s happening to your body throughout the day,” she said. It could allow nutritionists to carefully build meal plans for clients, or for doctors to come up with more efficient medical treatments.

But that could be just a start. “When we are wearing five different computers and they can all talk to each other, that sort of input information will cause an exponential increase” in what humans can do, Ms. Gupta said.

OF course, the range of ethical and privacy concerns is enormous.

Clive Thompson, author of a forthcoming book, “Smarter Than You Think: How Technology Is Changing Our Minds for the Better,” says that these exciting possibilities need to be explored very carefully.

“We are talking about massive archives of personal data that are really revealing,” Mr. Thompson said. “Not to mention that there is definitely something unsettling about emotion recognition becoming another part of our lives that is archived and scrutinized.”

He said an insurance company, for example, might want to know its customers’ moods, so it can raise their fees if they show signs of becoming depressed or sick. And employers might want to know when their staff members are bored, so they can give them more work or reprimand them if their attention wanders during an important meeting. He wondered whether we would all become better at masking our emotions if we knew that we were being watched and analyzed. And could machines use what they know about our emotions to manipulate us into buying things?

Once a phone really does understand our emotions, the possibilities, good and bad, seem to spiral without limit. We’re not there yet, but the future starts now.














Smart Drones


By
BILL KELLER

Published: March 16, 2013

IF you find the use of remotely piloted warrior drones troubling, imagine that the decision to kill a suspected enemy is not made by an operator in a distant control room, but by the machine itself. Imagine that an aerial robot studies the landscape below, recognizes hostile activity, calculates that there is minimal risk of collateral damage, and then, with no human in the loop, pulls the trigger.

Welcome to the future of warfare. While Americans are debating the president’s power to order assassination by drone, powerful momentum (scientific, military and commercial) is propelling us toward the day when we cede the same lethal authority to software.

Next month, several human rights and arms control organizations are meeting in London to introduce a campaign to ban killer robots before they leap from the drawing boards. Proponents of a ban include many of the same people who succeeded in building a civilized-world consensus against the use of crippling and indiscriminate land mines. This time they are taking on what may be the trickiest problem arms control has ever faced.

The arguments against developing fully autonomous weapons, as they are called, range from moral (“they are evil”) to technical (“they will never be that smart”) to visceral (“they are creepy”).

“This is something people seem to feel at a very gut level is wrong,” says Stephen Goose, director of the arms division of Human Rights Watch, which has assumed a leading role in challenging the dehumanizing of warfare. “The ugh factor comes through really strong.”

Some robotics experts doubt that a computer will ever be able to reliably distinguish between an enemy and an innocent, let alone judge whether a load of explosives is the right, or proportional, response. What if the potential target is already wounded, or trying to surrender? And even if artificial intelligence achieves or surpasses a human level of competence, the critics point out, it will never be able to summon compassion.

Noel Sharkey, a computer scientist at the University of Sheffield and chairman of the International Committee for Robot Arms Control, tells the story of an American patrol in Iraq that came upon a group of insurgents, leveled their rifles, then realized the men were carrying a coffin off to a funeral. Killing mourners could turn a whole village against the United States. The Americans lowered their weapons. Could a robot ever make that kind of situational judgment?

Then there is the matter of accountability. If a robot bombs a school, who gets the blame: the soldier who sent the machine into the field? His commander? The manufacturer? The inventor?

At senior levels of the military there are misgivings about weapons with minds of their own. Last November the Defense Department issued what amounts to a 10-year moratorium on developing them while it discusses the ethical implications and possible safeguards. It’s a squishy directive, likely to be cast aside in a minute if we learn that China has sold autonomous weapons to Iran, but it is reassuring that the military is not roaring down this road without giving it some serious thought.

Compared with earlier heroic efforts to outlaw land mines and curb nuclear proliferation, the campaign against licensed-to-kill robots faces some altogether new obstacles.

For one thing, it’s not at all clear where to draw the line. While the Terminator scenario of cyborg soldiers is decades in the future, if not a complete fantasy, the militaries of the world are already moving along a spectrum of autonomy, increasing, bit by bit, the authority of machines in combat.

The military already lets machines make critical decisions when things are moving too fast for deliberate human intervention. The United States has long had Aegis-class warships with automated antimissile defenses that can identify, track and shoot down incoming threats in seconds. And the role of machinery is expanding toward the point where that final human decision to kill will be largely predetermined by machine-generated intelligence.

“Is it the finger on the trigger that’s the problem?” asks Peter W. Singer, a specialist in the future of war at
the Brookings Institution. “Or is it the part that tells me ‘that’s a bad guy’?”

Israel is the first country to make and deploy (and sell, to China, India, South Korea and others) a weapon that can attack pre-emptively without a human in charge. The hovering drone called the Harpy is programmed to recognize and automatically divebomb any radar signal that is not in its database of “friendlies.” No reported misfires so far, but suppose an adversary installs its antiaircraft radar on the roof of a hospital?

Professor Sharkey points to the Harpy as a weapon that has already crossed a worrisome threshold and probably can’t be called back. Other systems are close, like the Navy’s X-47B, a pilotless, semi-independent, carrier-based combat plane that is in the testing stage. For now, it is unarmed but it is built with two weapons bays. We are already ankle-deep in the future.

For military commanders the appeal of autonomous weapons is almost irresistible and not quite like any previous technological advance. Robots are cheaper than piloted systems, or even drones, which require scores of technicians backing up the remote pilot. These systems do not put troops at risk of death, injury or mental trauma. They don’t get tired or frightened. A weapon that is not tethered to commands from home base can continue to fight after an enemy jams your communications, which is increasingly likely in the age of electromagnetic pulse and cyberattacks.

And no military strategist wants to cede an advantage to a potential adversary. More than 70 countries currently have drones, and some of them are hard at work on the technology to let those drones off their virtual leashes.

“Even if you had a ban, how would you enforce it?” asks Ronald Arkin, a computer scientist and director of
the Mobile Robot Laboratory at Georgia Tech. “It’s just software.”

THE military, and the merchants of war, are not the only ones invested in this technology. Robotics is a hyperactive scientific frontier that runs from the most sophisticated artificial intelligence labs down to middle-school computer science programs. Worldwide, organized robotics competitions engage a quarter of a million school kids. (My 10-year-old daughter is one of them.) And the science of building killer robots is not so easily separated from the science of making self-driving cars or computers that excel at “Jeopardy.”

Professor Arkin argues that automation can also make war more humane. Robots may lack compassion, but they also lack the emotions that lead to calamitous mistakes, atrocities and genocides: vengefulness, panic, tribal animosity.

“My friends who served in Vietnam told me that they fired, when they were in a free-fire zone, at anything that moved,” he said. “I think we can design intelligent, lethal, autonomous systems that can potentially do better than that.”

Arkin argues that autonomous weapons need to be constrained, but not by abruptly curtailing research. He advocates a moratorium on deployment and a full-blown discussion of ways to keep humans in charge.

Peter Singer of Brookings is also wary of a weapons ban: “I’m supportive of the intent, to draw attention to
the slippery slope we’re going down. But we have a history that doesn’t make me all that optimistic.”

Like Singer, I don’t hold out a lot of hope for an enforceable ban on death-dealing robots, but I’d love to be proved wrong. If war is made to seem impersonal and safe, about as morally consequential as a video game, I worry that autonomous weapons deplete our humanity. As unsettling as the idea of robots’ becoming more like humans is the prospect that, in the process, we become more like robots.





A Motherboard Walks Into a Bar ...


WHAT do you get when you cross a fragrance with an actor?

Answer: a smell Gibson.

Groan away, but you should know that this joke was written by a computer. “Smell Gibson” is the C.P.U. child of something called Standup (for System to Augment Non-Speakers’ Dialogue Using Puns), a program that generates punning riddles to help kids with language disabilities increase their verbal skills.

Though it’s not quite Louis C. K., the Standup program, engineered by a team of computer scientists in Scotland, is one of the more successful efforts to emerge from a branch of artificial intelligence known as computational humor, which seeks to model comedy using machines.

As verbal interaction between humans and computers becomes more prominent in daily life (from Siri, Apple’s voice-activated assistant technology, to speech-based search engines to fully automated call centers), demand has grown for “social computers” that can communicate with humans in a natural way. Teaching computers to grapple with humor is a key part of this equation.

“Humor is everywhere in human life,” says the Purdue computer scientist Julia M. Taylor, who helped organize the first-ever United States symposium on the artificial intelligence of humor, in November. If we want a computational system to communicate with human life, it needs to know how to be funny, she says.

As it turns out, this is one of the most challenging tasks in computer science. Like much of language, humor is loaded with abstraction and ambiguity. To understand it, computers need to contend with linguistic sleights like irony, sarcasm, metaphor, idiom and allegory, things that don’t readily translate into ones and zeros.

On top of that, says Lawrence J. Mazlack of the University of Cincinnati, a seminal figure in the field of computational linguistics, humor is context-dependent: what’s funny in one situation may not be funny in another. As an example, he cites Henny Youngman’s signature line, “Take my wife, please,” which came about by accident when an usher seating Youngman’s wife mistook the comedian’s request for a gag.

The cognitive processes that cause people to snicker at this sort of one-liner are only partly understood, which makes it all the more difficult for computers to mimic them. Unlike, say, chess, which is grounded in a fixed set of rules, there are no hard-and-fast formulas for comedy.

To get around that cognitive complexity, computational humor researchers have by and large taken a more concrete approach: focusing on simple linguistic relationships, like double meanings, rather than on trying to model the high-level mental mechanics that underlie humor.

Standup, for instance, writes jokes by searching through a “lexical database” (basically, a huge dictionary) for words that fit linguistic patterns found in puns (phonetic and semantic similarities, mostly) and comes up with doozies like: “What do you call a fish tank that has a horn? A goldfish bull.”
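
In miniature, that lexicon-search recipe looks something like the Python sketch below. The tiny word list, the crude phonetic test and the riddle template are all stand-ins; Standup’s real database and pun patterns are far richer:

    # A toy version of the lexicon-search approach described above: find a
    # word that sounds like part of a compound, then fill a riddle template.
    # This miniature "lexical database" is hypothetical.
    LEXICON = {
        "bowl": "a round dish",
        "bull": "a male cow",
        "horn": "a pointed growth",
    }

    def sounds_like(a, b):
        """Crude phonetic similarity: same first and last letter."""
        return a[0] == b[0] and a[-1] == b[-1]

    def make_riddle(compound, setup):
        head, tail = compound.split()  # e.g., "goldfish bowl"
        for word in LEXICON:
            if word != tail and sounds_like(word, tail):
                return f"What do you call {setup}? A {head} {word}."
        return None

    print(make_riddle("goldfish bowl", "a fish tank that has a horn"))
    # -> What do you call a fish tank that has a horn? A goldfish bull.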

Another tack has been to apply machine-learning algorithms, which crunch mountains of data to identify statistical features that can be used to classify text as funny or unfunny. This is more or less how spam filters work: they decide which messages to tag by analyzing billions of e-mails and compiling a database of red flags (like any urgent message from a deposed Nigerian prince).
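
The spam-filter analogy is fairly literal: the same count-the-words, train-a-classifier pipeline can be pointed at humor. Here is a minimal sketch with scikit-learn, using made-up training lines in place of the real corpora:

    # Illustrative only: the spam-filter recipe (count word features, train
    # a Naive Bayes classifier) applied to humor detection.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = [
        "Take my wife ... please",                     # joke
        "Why did the chicken cross the road",          # joke
        "Stocks closed lower in heavy trading today",  # headline
        "A stitch in time saves nine",                 # proverb
    ]
    labels = ["funny", "unfunny"][0:1] * 2 + ["unfunny"] * 2

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, ["funny", "funny", "unfunny", "unfunny"])
    print(model.predict(["Why did the stock cross the road"]))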

Figuring out when a joke is a joke is where artificial intelligence researchers have made, perhaps, the most progress. For her Ph.D. dissertation, Dr. Taylor built a system that could identify children’s jokes out of various selections of prose with remarkable accuracy. Not only that, but it could also explain why it found something funny, which suggests that on some level it “got” the jokes.

In a related experiment, the computer scientists Rada Mihalcea at the University of North Texas, Denton, and Carlo Strapparava, now at Fondazione Bruno Kessler in Italy, trained computers to separate humorous one-liners from nonhumorous sentences borrowed from Reuters headlines, proverbs and other texts. By analyzing the content and style of these sentences, the program was able to spot the jokes with an average accuracy of 87 percent.

Putting such research to good use, a pair of wags at the University of Washington last year taught a computer when to use the refrain “That’s what she said,” theirs being one of the few academic papers to cite “The Office” among its references.

Some will surely wonder if the point of such research goes beyond devising software that can make the C++ set crack up at hackathons. Thankfully, it does. The goal of computational humor, and of computational linguistics as a whole, is to design machines akin to the shipboard computer on “Star Trek”: ones that can answer open-ended questions and carry on casual conversations with human beings, even William Shatner.

In the process, scientists hope to gain insights into the nature of humor: Why do we laugh at certain things and not at others? Why does anyone watch “Two and a Half Men”?

If computer humorists can answer any of these questions, we won’t just get a deeper understanding of how language works but also, ultimately, of what it means to be human.

Robotic Gadgets for Household Chores

By
KEVIN J. O’BRIEN

Published: December 9, 2012

BERLIN: Joseph Schlesinger, an engineer living near Boston, thinks robotic toys are too expensive, the result of extravagant designs, expensive components and a poor understanding of consumer tastes. So this year, Mr. Schlesinger, 23, began to manufacture an affordable robot, one he is selling for $250 to holiday shoppers.

His creation, the Hexy, is a six-legged, crablike creature that can navigate its own environment and respond to humans with a hand wave or other programmable gesture. Mr. Schlesinger said he had been able to lower production costs by using free software and by molding a lot of the plastic parts locally in Massachusetts, not in China.

Since setting up his company, ArcBotics, in suburban Somerville, Massachusetts, Mr. Schlesinger has built a backlog of more than 1,000 orders. His goal, he said, was to become “the Ikea of robotics.”

“I think the market for consumer robotics is poised to explode,” said Mr. Schlesinger, a graduate of
Worcester Polytechnic Institute in Massachusetts. “We are only at the beginning.”

Since the 1960s, robots have assumed major roles in industrial manufacturing and assembly, the remote detonation of explosives, search and rescue, and academic research. But the devices have remained out of reach, in affordability and practicality, to most consumers.

That, according to Professor Andrew Ng, the director of the Artificial Intelligence Lab at Stanford University in California, is about to change. One big reason, Mr. Ng said, is the mass production of smartphones and game consoles, which has driven down the size and price of robotic building blocks like accelerometers, gyroscopes and sensors.

On the edges of consumer consciousness, the first generation of devices with rudimentary artificial intelligence are beginning to appear: entertainment and educational robots like the Hexy, and a line of tireless household drones that can mow lawns, sweep floors, clean swimming pools and even enhance golf games.

“I’m seeing a huge explosion of robotic toys and believe that there will be one soon in industry,” said Mr. Ng, an associate professor of computer science at Stanford.

The most advanced robots remain exotic workhorses like NASA’s Mars Curiosity Rover, which cost $2.5 billion, and the LS3, a doglike robot being developed for the U.S. military that can carry a 400-pound, or 180-kilogram, load more than 20 miles, or about 30 kilometers. The mechanical beast of burden, whose price is not public, is being made by a consortium led by Boston Dynamics. In Menlo Park, California, engineers at Willow Garage, a robotics firm, are selling the two-armed, 5-foot-4-inch (1.63-meter) rolling robot called the PR2 for $400,000.

A video on Willow Garage’s Web site shows the PR2 fetching beer from a refrigerator, which, while an engineering and programming feat, is an expensive way to get beer.

“I think we’re still some years away from useful personal robots making pervasive appearances in our homes,” Mr. Ng said.

Right now, for the masses, there is the CaddyTrek, a robotic golf club carrier that follows a player from tee to fairway to green through tall grass, up 30-degree slopes and in snow, for as many as 27 holes on a single charge. Players wear a remote control on their belts, which acts as a homing beacon for the self-propelled cart, which trails six paces behind the player.

Golfers can also navigate the robotic cart, which is made by FTR Systems, to the next tee while they finish putting.

“Someone ran up to me last week and said that my golf cart had broken free and was rolling through the parking lot,” said Richard Nagle, the sales manager for CaddyTrek in North America and Europe. “Most people just stop and stare. They’re not used to this.”

FTR Systems does not disclose the proprietary technology it uses to power the CaddyTrek, which sells for $1,595, but Mr. Nagle said sales of the robot carriers had been strong, and the company had been rushing to meet orders in the United States and Europe.

While one robot totes your golf clubs, another, the Polaris 9300xi, could be cleaning your swimming pool. The blue, four-wheel drone submerges in a swimming pool and pushes itself along the bottom and walls to dislodge and filter sediment. The device, which is made by Zodiac Pool Systems of San Diego, cleans pools as much as 60 feet long.

Users can program the robot to clean a swimming pool at regular intervals or use a remote control to steer it by hand. The Polaris 9300xi sells for $1,379.

A silent, four-wheeled grass cutter called the Automower, made by Husqvarna, a Swedish power tool and lawn care company that also owns the McCulloch and Gardena brands, can care for lawns as large as 6,000 square meters, or 64,000 square feet.

The Automower cuts grass by staying within a boundary wire drawn around the perimeter, sensing and avoiding trees, flower beds and other obstacles. The mower, which is sold in Europe and Asia but not in the United States, cuts rain or shine and returns to recharge itself when its batteries get low. Advanced models use GPS and can recognize and return to narrow, hard-to-reach parts of lawns and gardens, ensuring that no areas are missed.

The least expensive garden drone, the Automower 305, costs €1,500, or $1,965, and can mow 500 square meters on one charge. The top-end Automower 265AX sells for about €4,600 in Europe and is designed for hospitals, hotels and commercial properties.

The Swedish company sold its first robotic mower, which was solar-powered, in 1995. But the device was too expensive and too unreliable in climates like that of northern Europe, where sunny summers are not guaranteed. About five years ago, Husqvarna switched to battery power, which lowered the cost and eliminated weather as a factor.

Henric Andersson, the director of product development at Husqvarna in Stockholm, said the company’s robotic mowers were getting extensive use in Scandinavia and Europe.

“After being around for years, sales really began taking off about five years ago,” said Mr. Andersson. “The graph of sales looks like a hockey stick. The robotic mower has reached a tipping point. More people are now incorporating the device into their lives.”

Other basic robots are beginning to work inside the home. iRobot, a firm founded by three former employees at the Massachusetts Institute of Technology, makes robots that vacuum, sweep and mop floors. The iRobot Roomba 790, which costs €900 in Europe, is a self-propelling vacuum cleaner that can sense and navigate interior spaces, adjusting by itself from carpets to hard floors, and wielding side brushes for corners and walls.

The iRobot Scooba 390 cleans sealed hardwood, tile and linoleum floors, no pre-sweeping required. The device looks like a hovering bathroom scale and can hug walls and avoid staircases and other dangerous drops as it cleans, vacuums, wet mops and dries as much as 850 square feet of floor on a single charge. The Scooba 390 sells for €500.

Theoretically, a house full of robotic gadgets can lead to more free time, which is where the AR Drone 2.0 quadricopter, a flying, smartphone-controlled helicopter, may come in. The AR Drone 2.0 is equipped with two onboard video cameras: one conventional and one high-definition, which can stream and store video of its flights.

The AR Drone 2.0, which the user steers over the helicopter’s own Wi-Fi network, can be guided through looping maneuvers and fly as far away as 50 meters at speeds as high as 18 kilometers per hour. The craft can fly about 12 minutes before needing a recharge. The device, made by Parrot, based in Paris, costs €300 in Europe.

Parrot has sold more than 250,000 of the drones since the product was introduced in 2010.


How Many Computers to Identify a Cat? 16,000


An image of a cat that a neural network taught itself to recognize.

By
JOHN MARKOFF

Published: June 25, 2012

MOUNTAIN VIEW, Calif.: Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain.

There Google scientists created one of the largest neural networks for machine learning by connecting
16,000 computer processors, which they turned loose on the Internet to learn on its own.

Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.

The neural network taught itself to recognize cats, which is actually no frivolous activity. This week the researchers will present the results of their work at a conference in Edinburgh, Scotland. The Google scientists and programmers will note that while it is hardly news that the Internet is full of cat videos, the simulation nevertheless surprised them. It performed far better than any previous effort by roughly doubling its accuracy in recognizing objects in a challenging list of 20,000 distinct items.

The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.

Although some of the computer science ideas that the researchers are using are not new, the sheer scale of the software simulations is leading to learning systems that were not previously possible. And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech.

“This is the hottest thing in the speech recognition field these days,” said Yann LeCun, a computer scientist who specializes in machine learning at the Courant Institute of Mathematical Sciences at New York University.

And then, of course, there are the cats.

To find them, the Google research team, led by the Stanford University computer scientist Andrew Y. Ng and the Google fellow Jeff Dean, used an array of 16,000 processors to create a neural network with more than one billion connections. They then fed it random thumbnails of images, one each extracted from 10 million YouTube videos.

The videos were selected randomly and that in itself is an interesting comment on what interests humans in the Internet age. However, the research is also striking. That is because the software-based neural network created by the researchers appeared to closely mirror theories developed by biologists that suggest individual neurons are trained inside the brain to detect significant objects.

Currently much commercial machine vision technology is done by having humans “supervise” the learning process by labeling specific features. In the Google research, the machine was given no help in identifying features.

“The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data,” Dr. Ng said.
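
In miniature, “letting the data speak” is unsupervised feature learning: train a network to reconstruct unlabeled inputs and let useful features emerge in its hidden layer. The toy autoencoder below gestures at the idea; Google’s system had roughly a billion connections and a different architecture, so everything here is an illustrative assumption:

    # Toy autoencoder: learns features from unlabeled data by trying to
    # reconstruct its input. No labels are ever provided.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((500, 16))            # 500 unlabeled "images", 16 pixels
    W1 = rng.normal(0, 0.1, (16, 4))     # encoder: 16 pixels -> 4 features
    W2 = rng.normal(0, 0.1, (4, 16))     # decoder: 4 features -> 16 pixels

    lr = 0.05
    for _ in range(2000):
        H = np.tanh(X @ W1)              # hidden features
        X_hat = H @ W2                   # reconstruction
        err = X_hat - X
        # Backpropagate the reconstruction error through both layers.
        W2 -= lr * H.T @ err / len(X)
        dH = (err @ W2.T) * (1 - H ** 2)
        W1 -= lr * X.T @ dH / len(X)

    print("reconstruction error:", np.mean(err ** 2))

The columns of W1 are the learned “feature detectors”; at Google’s scale, one such unit ended up responding to cat faces.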

“We never told it during the training, ‘This is a cat,’ ” said Dr. Dean, who originally helped Google design the software that lets it easily break programs into many tasks that can be computed simultaneously. “It basically invented the concept of a cat. We probably have other ones that are side views of cats.”

The Google brain assembled a dreamlike digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images. The scientists said, however, that it appeared they had developed a cybernetic cousin to what takes place in the brain’s visual cortex.

Neuroscientists have discussed the possibility of what they call the “grandmother neuron,” specialized cells in the brain that fire when they are exposed repeatedly or “trained” to recognize a particular face of an individual.

“You learn to identify a friend through repetition,” said Gary Bradski, a neuroscientist at Industrial Perception, in Palo Alto, Calif.

While the scientists were struck by the parallel emergence of the cat images, as well as human faces and body parts in specific memory regions of their computer model, Dr. Ng said he was cautious about drawing parallels between his software system and biological life.



“A loose and frankly awful analogy is that our numerical parameters correspond to synapses,” said Dr. Ng. He noted that one difference was that despite the immense computing capacity that the scientists used, it was still dwarfed by the number of connections found in the brain.

“It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses,” the researchers wrote.

Despite being dwarfed by the immense scale of biological brains, the Google research provides new evidence that existing machine learning algorithms improve greatly as the machines are given access to large pools of data.

“The Stanford/Google paper pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts,” said David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing. He said that rapid increases in computer technology would close the gap within a relatively short period of time: “The scale of modeling the full human visual cortex may be within reach before the end of the decade.”

Google scientists said that the research project had now moved out of the Google X laboratory and was being pursued in the division that houses the company’s search business and related services. Potential applications include improvements to image search, speech recognition and machine language translation.

Despite their success, the Google researchers remained cautious about whether they had hit upon the holy
grail of machines that can teach themselves.

“It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but
my gut feeling is that we still don’t quite have the right algorithm yet,” said Dr. Ng.









Talking Head

‘How to Build an Android,’ by David F. Dufty


“How to Build an Android” is the honest title of an earnest book, the first by David F. Dufty, a senior research officer at the Australian Bureau of Statistics. It explains how a team of researchers at the University of Memphis collaborated in 2005 with an artist and robotics expert, David Hanson, to create what was then the most sophisticated android anywhere, a replica of the science-fiction writer Philip K. Dick.


HOW TO BUILD AN ANDROID

The True Story of Philip K. Dick’s Robotic Resurrection

By David F. Dufty

If you have heard of him, you probably know that he is missing, or at least his head is. It disappeared in December 2005, when Hanson was flying from Dallas to San Francisco to show Phil off to Google. Hanson changed planes in Las Vegas, but left Phil’s head in a carry-on bag in the overhead bin. He didn’t realize what he had done until he got to San Francisco. The bag continued on to Orange County, and has never been recovered.

Where did Phil go? To many people the disappearance sounded like something out of Philip K. Dick, whose lurid, drug-enriched work inspired Hollywood’s dark science-fiction thrillers “Blade Runner,” “Total Recall” and more. He wrote a lot about artificial intelligence, impenetrable conspiracies and androids going missing. And he had lived in Orange County until his death in 1982. Did Phil decide to go there on his own? Was he stolen, or did he escape? What had he been thinking?

Dufty admits right away that nobody knows. He goes searching anyway, visiting the warehouse in Scottsboro, Ala., where the nation’s unclaimed baggage goes to die. He wanders among the miles of abandoned toiletries, electronics, T-shirts and toys: Nope, no Phil.

This is the point where a storyteller might be lured toward the paranoid or paranormal. But all Dufty wants to do is tell what he says is the all-true back story: Who was Phil, and how did he come to be?

Dufty was a postdoctoral fellow then at the Institute for Intelligent Systems at the University of Memphis, where he worked closely with the scientists building Phil. His reconstruction through interviews with the participants is an appealing depiction of brilliant minds dreaming big on shoestring budgets, particularly Hanson, a skilled sculptor whose company, Hanson Robotics, had been pushing the frontiers of android making for years, and Andrew Olney, a programmer whose job was to give Phil the spark of artificial intelligence: the ability to recognize and convincingly respond to human speech.

Phil had Dick’s face, sculptured from photographs using a spongy, skinlike polymer called Frubber. With motors and cables as his facial muscles, his mouth moved when he talked. He made faces, and met a visitor’s gaze. He had Dick’s own clothing, provided by the author’s family. The clothes hung on an inanimate mannequin; this android was advanced but not that advanced. The sum total of Phil’s animate presence was in his head.

For he had Dick’s brain, or at least the closest that Olney and his collaborators could assemble using the best early-21st-century technology: software that combed through an immense database of Dick’s own words as expressed during his lifetime in books and interviews, and shaped it into speech.

It wasn’t perfect; even a writer as well known and talkative as Dick did not leave enough recorded traces of himself to allow an android imitator to even begin approaching the vast totality of a human mind. Phil could spit out an accurate Dick answer to a specific question if it found a match. If it didn’t, Olney’s solution was to program Phil to improvise, to spin related words into phrases in a way that (he hoped) sounded coherent.
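
That two-step design (retrieve a recorded answer when a question matches, otherwise improvise from related words) can be sketched in a few lines of Python. The keyword-overlap matching and the word-stitching below are crude, hypothetical stand-ins for Olney’s actual code:

    # Illustrative only: answer from a database of the author's recorded
    # words when a question matches well enough; otherwise improvise.
    import random

    CORPUS = {
        "what do you think of blade runner":
            "They commercialized my literature without asking me.",
        "tell me about your dreams":
            "I remember one dream I had, about a cereal box.",
    }

    def answer(question):
        q = set(question.lower().strip("?").split())
        # Step 1: retrieval. Score each stored question by word overlap.
        best, score = None, 0
        for stored, reply in CORPUS.items():
            overlap = len(q & set(stored.split()))
            if overlap > score:
                best, score = reply, overlap
        if score >= 3:
            return best
        # Step 2: no good match, so improvise by stitching related words.
        words = [w for reply in CORPUS.values() for w in reply.split()]
        return " ".join(random.sample(words, 6)) + " ..."

    print(answer("What do you think of Blade Runner?"))    # retrieved
    print(answer("Do androids dream of electric sheep?"))  # improvised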

Phil was also given canned responses to predicted exchanges, like this:

Q. What are you?

A. I am Phil, a male Philip K. Dick android electronic brain, a robotic portrait of Philip K. Dick, a computer
machine.

The sum of these parts: that was Phil. He was a dazzling blend of technology and art. He was also erratic, as you might expect any first-generation android to be. Unexpected questions and loud noises threw him off. Androids have a hard time responding to human speech cues, knowing when to answer and when to stop. Sometimes Phil would get into a self-perpetuating conversational loop. His handlers, who monitored his responses on a computer screen, had to keep a close eye.

Once someone asked Phil what he thought of “Blade Runner.” He started talking about commercializing literature and merchandising rights. Then he kept talking, and talking, as Hanson watched the dialogue monitor with alarm: “There seemed to be a large amount of output waiting in the buffer, and it was growing larger every second.” Phil wasn’t going to shut up. Hanson cut off Phil’s mike, so he seemed to stop, though his lips kept moving and the words kept (silently) flowing.

Dufty provides an exhaustive understanding of how to build an android, but seems to have missed some of the memos on how to build a book. His prose has the curiously flat quality of computer-generated speech, and his flights of insight and imagery are too often earthbound. He has a lot of technical ground to cover, but his narrative tends to unfold with the dread linearity of PowerPoint slides.

This technical-manual approach sometimes slows the drama, but Dufty finds it where he can. He is hip to campus office politics and the way egos rub together, and the suspenseful anxiety of grant applications. When a Chicago Tribune reporter writes about Phil’s celebrated appearance at a technology expo and mentions Hanson Robotics but not the whole Memphis team, the hurt feelings are worth a couple of tense pages.

But the character who ends up being most intriguing is poor nonhuman Phil, whose unknown destiny gives the book a tinge of sorrow. Tied as he is to the life and words of a deeply troubled and testy namesake, he can be churlish at times, giving snotty answers to simple questions. A Wall Street Journal writer said of Phil, “The most advanced robot on exhibition was also, in my view, the most obnoxious.”

That’s hardly surprising. But one mystery is why Phil’s creators never gave him an answer to a question you would expect anyone to ask Philip K. Dick. It’s the title of the story that inspired “Blade Runner”: “Do androids dream of electric sheep?”

As Dufty explains, no one ever thought of it. When the question inevitably arrived, Phil’s unscripted reply began:

“Yeah, exactly. But I couldn’t explain that feeling. I just couldn’t give an explanation of it. But um, well, I remember one dream I had. In the dream Tess and I were in the kitchen on a high stool, we’d found a cereal box and on the back of the cereal box is extremely valuable information directed at us, and we were both reading it. And the relationship between that and ‘Ubik’ is an obvious one.”

Poor Phil. He was a little nutty, but he was A.I.’s pioneer. Hanson has since built a more advanced version, minus the programming. Huge corporations are using their computing power and money to make big gains in artificial intelligence. Apple has devised a “personal assistant” for the iPhone, Siri, that gives eerily conversational answers to plain-speech questions. She’s very good, and supposedly will get better.

But this is how Siri handles the question:

Q. Do androids dream of electric sheep?

A. I found three livestock services a little ways from you.

I like to think Phil could do better, given another chance.