Roboethics and Telerobotic Weapons Systems

John P. Sullins

Manuscript received February 28, 2009. This work was supported in part by a grant from Sonoma State University. J. P. Sullins is an Assistant Professor of Philosophy at Sonoma State University, Rohnert Park, CA 94928 USA (phone: 707-664-2277; fax: 707-664-4422; e-mail: john.Sullins@sonoma.edu).





Abstract

A technology is used ethically when it is intelligently controlled to further a moral good. So we can easily extrapolate that the ethical use of telerobotic weapons technology occurs only when that technology is intelligently controlled and advances a moral action. This paper deals with the first half of the conjunction: can telerobotic weapons systems be intelligently controlled? At the present time it is doubtful that these conditions are being met, and I suggest some ways in which this situation could be improved.


INTRODUCTION

A technology is used ethically when it is intelligently controlled to further a moral good. The philosopher Carl Mitcham explains that the intelligent control of technology requires:

(1) Knowing what we should do with technology, the end or goal toward which technological activity ought to be directed; (2) knowing the consequences of technological actions before the actual performance of such actions; and (3) acting on the basis of or in accord with both types of knowledge; in other words, translating intelligence into active volition [1].

So we can easily extrapolate that the ethical use of telerobotic weapons technology occurs only when that technology is intelligently controlled and advances a moral action [2]. This paper will not attempt to decide the question of when warfare is just or ethical; for now it will be assumed that there are at least some cases where it might be. Instead we will look at the first half of the conjunction and decide if telerobots can indeed be intelligently controlled in the manner that Mitcham requires. At the present time it is doubtful that these conditions are being met, and I suggest some ways in which this situation could be improved.

I. PROBLEMS WITH THE INTELLIGENT CONTROL OF TELEROBOTIC WEAPONS SYSTEMS

A. Telepistemological Distancing

Telerobotic systems change the way the controller sees the situation within which the robot is being navigated. The first insufficiently addressed effect of telerobotic weapons systems is telepistemological distancing: the removal of the operator from the location of military activity.
The main function of military robotics is to extricate precious human agents from the direct harm encountered on the battlefield.

There are at least two distinctive types of robots used for this purpose: autonomous robots and telerobots. Telerobots are to be distinguished from autonomous robots in that a telerobot has one or more human agents who have some direct control over the activation of several, or all, of the system's motors and actuators. For instance, the famous NASA Mars Rovers are telerobots in that they receive commands from their controllers on Earth and then execute those commands. Thus, telerobots fit in a range of machines that begins with direct radio control, where a remote operator makes every choice for the machine's operation, and extends all the way to semi-autonomous machines that are only under intermittent direct control.

Fully autonomous robots would have little direct input from their owner/operators and would have to make important decisions on their own. Today, it is arguably not the case that there are any such things as fully autonomous robots equivalent to human or even animal natural agents. The machines that may be developed in the future, which might have robust AI and ALife functions, are fascinating to contemplate and raise many intriguing ethical issues [2]-[5]. However, they are not currently being deployed to a battlefield, so I will not cover their ethical status in this paper.

The autonomy of telerobots is subtle. Even though there are obvious human operators interacting with the machine during its operation, there are a great many autonomous systems in the machine over which the operator has minimal control; in some cases the operator may have only an abort-action button [6]. When a robot is used on a battlefield, the hostile nature of the situation will not give the operators the luxury of the time necessary to make deliberate decisions about the robot's behavior and have it accomplish its mission. This fact will drive the tendency toward more and more autonomy, even in ostensibly human-controlled telerobots [7].


The operators of telerobots will see the world a little differently while controlling the machine, and this may impact their ability to make ethical decisions. When one is experiencing the world through the sensors on a robot, one is experiencing the world telepistemologically, meaning that one is building beliefs about the situation the robot is in even though the operator may be many miles away from the telerobot. This adds a new wrinkle to traditional epistemological questions. In short, how does looking at the world through a robot color one's beliefs about the world?

Epistemology is no trivial subject and this paper is not meant to be a full treatise on it. But suffice it to say that a useful epistemology will provide some sense of assurance that the propositions one believes about the world are true and useful to the agent that possesses them. As anyone who has studied even a little philosophy will know, or as anyone who has tried to program autonomous robots can attest, this trick turns out to be fiendishly difficult. Even just getting a robot to recognize a soda can in a noisy environment is tough, which raises the question of how human agents accomplish this same task. If we move the robot out of the lab and onto a battlefield, then we are no longer looking for innocent soda cans but for enemy agents who are actively trying to deceive the robot, and we must also distinguish the enemy from friendly or neutral agents present at the scene. This is obviously a monumental problem that will tax our telepistemological systems to the limit.

Thus, the first requirement for the intelligent control of telerobotic weapon systems must be that the view of the world the robot provides to its operators is epistemologically reliable. In order to be successful, let alone ethical, a telerobotic weapons system must provide a telepistemological view of the situation it is in that is accurate enough that, given some agents A (a military telerobot/human team), the telerobot provides true knowledge of an event, meaning that some proposition P (e.g., there is an enemy in that house and there are no civilians in the house) is believed to be true if and only if that proposition is indeed true. Even in a noisy environment filled with smoke and low light, the agents A must be provided with accurate and meaningful information, allowing the operators to use the telerobot effectively to advance some ethically positive course of action. The agents can't just believe P to be true by pure luck, gut feeling, or happenstance; there have to be good reasons to support the belief.¹

¹ I am well aware that this is a quick gloss over the Reliable-Indicator theory of epistemology and that there are well-known paradoxes that can occur, such as beliefs that ensure their own truth self-referentially. That detail is unimportant here; for a full exploration of this point I refer the reader to Belief, Truth and Knowledge, by D.M. Armstrong, London: Cambridge University Press, 1973.
We might begin by noticing that many mundane technologies have a similar function. For instance, binoculars or other vision aids must provide a soldier with accurate telepistemological information about the world. How does a soldier know that what she sees through the binoculars is a true representation of the world? She knows what she sees is true because the binoculars operate under the physical laws of optics and she knows, or can know, the laws of optics. This is a reliable chain of causes, so the soldier is justified in believing what she sees.

The question now is whether or not a telerobot provides a reliable chain of causes for its operator(s). The answer is not as simple as it was for the optical binoculars, because the images the operator(s) see on their screen as they operate a telerobot are enhanced or altered computationally, which may be epistemologically suspect [8]. Also, we have ignored the fact that even with the simple binoculars, once the images enter into the mind of the operator or soldier, myriad social, political, and ethical prejudgments may color the image that has been perceived with epistemic noise.

We can now see that there are two loci of epistemic noise: 1) the technological medium the message is contained in, and 2) the preconditioning of the agent receiving the message. Let's now look at each of these as they apply to telerobotic weapons systems.

1) To know "that P" about some remote location through a technological medium, there must first be an actual fact to be known, the receiving agent has to believe it to be true, and the process by which the agent acquired the belief should be reliable, such that if the fact were not true, then the agent would not believe it.
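Stated schematically, this is a reliable-indicator (or "sensitivity") condition. The following display is only a rough sketch of that idea; the symbols K_A and B_A are my own shorthand for "A knows" and "A believes", not notation drawn from the cited literature:

    $$ K_A(P) \iff P \;\land\; B_A(P) \;\land\; \big(\lnot P \Rightarrow \lnot B_A(P)\big) $$

Here the final conjunct is read counterfactually: had P been false, the telerobot/human team A would not have come to believe P through the telerobotic medium.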

Is it reasonable to believe that telerobotic weapons now in use actually fulfill this requirement?

Today many of the telerobots in use are small drone aircraft. Flying them is not easy due to the telepistemological difficulty the pilot faces in developing accurate situational awareness. P. W. Singer reports in his book Wired for War that:

"The use of drones has increased significantly... There are so many UAVs buzzing above Baghdad, for instance, that it is the most crowded airspace in the entire world, with all sorts of near misses and even a few crashes. In one instance an unmanned Raven drone plowed into a manned helicopter" [9].

As this example shows, there is reason to be worried about the telepistemological efficacy of current technologies already in use. Unless this changes there is little hope of controlling these weapons intelligently, and thus diminished chances that they can be used ethically.

2) The other important locus of epistemic difficulty can be found in the preconceived notions that the operators of telerobots bring to the equation. The most important factor we must focus on now is what I will call distancing. The operators of these machines are typically many miles, sometimes thousands of miles, away from the military action that the telerobots are accomplishing. This can be seen as a moral good that the machines provide, in that the operator may be very safe from harm. But this moral good has a few unfortunate consequences, one of which is that it makes the use of military force more likely. Given that few, if any, ethical theories consider the state of war a moral good, anything that propagates war instead of seeking its speedy end cannot be used ethically. Telerobots provide an opportunity for military adventures that cost fewer lives, thus helping make these actions politically palatable. Distancing also helps facilitate political arguments that propagate the impression that modern warfare can be a surgical affair. The compelling videos of these machines in action also help foster the idea that warfare can be clean and surgical.

These images provide compelling evidence that we can
eliminate our enemies from the air with ease and
precision, fostering an illusion of military omnipresence
and omnipotence. The videos we see are highly
selective and focus on the big successes. This selective
sample will obviously result in a skewed opinion of the
technology [10].

The perceived accuracy and omnipotence of military operations provided by distancing is obviously a powerful disincentive toward accountability and media scrutiny of military affairs, because this technology provides video of its own operation, the ultimate embedded reporter. The most compelling of these videos can be selected for release to the public, assuaging any arguments that are critical of violent political action. Without this scrutiny it will be more likely that wars will be waged, which is obviously not an ethical outcome.

B. The Normalization of Warfare

Telerobots contribute to the acceptance of warfare as a normal part of everyday life. As a consequence of the distancing that telerobots provide, there is a growing tendency for the operators to be located great distances from the field of battle, sometimes even thousands of miles away [11], [12]. In fact, one of the major bases of operations for the US Air Force's unmanned aircraft is located just outside Las Vegas. The pilots of these aircraft commute to work and then operate telerobots on military missions for a three-hour shift, after which they return home to their normal lives [13]. For these pilots, fighting the war is just a normal part of their lives. Is there something ethically wrong with the normalization of warfare and the creation of shift-work military telerobot operators? The problem is that operating one of these machines is not just any old job; it is a job that requires the use of deadly force and the witnessing of the effects of that force on a regular basis. Imagine the mental gymnastics required to compartmentalize one's life: to be at war one moment and then, a few hours later, to be watching TV with one's family. To use this technology ethically, at a minimum we must be certain that we are not psychologically damaging the operators of these machines or their friends and family.

Regardless of the ethical status of these machines, politically there would be a strong motivation to pursue extending this practice to as many military operations as possible. If this were to be done, then there would be far fewer casualties. In fact, there would not even be much of a lifestyle loss for the military personnel who operated these telerobots. As long as the wars propagated with this technology remain targeted at countries that cannot retaliate in kind, these wars might go almost unnoticed by the general public [14].

As these systems become more autonomous and less technically demanding to fly or operate, the need for military professionals to operate them will diminish and the job will fall to other people. There is already a tendency for unmanned aircraft to be flown by younger enlisted men rather than drawing pilots from the typical pool of trained officers [15].

These trends are likely to distort the special ethical terrain that warfare inhabits. If the conduct of warfare becomes equivalent to a day at the office, then we might lose interest in its speedy conclusion. Again this suggests that telerobotic weapons resist intelligent use and, as they stand, propagate unethical situations.

C. The Perceived Antiseptic Layer of Telerobotics

Telerobots contribute to the myth of surgical warfare and limit our ability to view our enemies as fellow moral agents. Distancing and the special telepistemological problems associated with telerobotic weapons systems combine to place an impenetrable barrier between the aggressor and the targets of that aggression.

Telerobotic weapons systems place a tremendous antiseptic layer of technology between the combatants that may help each side to dehumanize the other. The operator of the machine will see his or her enemy as little more than thermal images on a screen, and the human targets of these machines will see only the teleoperated mechanical weapons of his or her foe. This type of warfare could intensify the hatred that is already fostered by current modes of armed conflict [16].

This type of warfare is likely to produce a disregard for the moral agency of one's enemies and may even foster a deeper hatred than that already caused by current modes of armed conflict. Nearly every ethical theory demands that moral agents be given special regard, even when they are one's enemy. If a Just War is possible, it can only be fought in a way that seeks a quick end to the conflict, treats enemy soldiers as moral agents, and gets both sides of the conflict back to the point where they can respect one another's moral worth. Telerobotic warfare will make this much more difficult to achieve. Already, the victims of the many telerobotic attacks that have occurred over the past few years have expressed their belief that these weapons are cowardly and that they also inflict devastating civilian casualties [17]. Whether or not these perceptions are true, they are the image that telerobotic weapons cultivate.

If telerobotic weapons are enhancing intergenerational
hatred between peoples, then they are not a technology that
is being intelligently controlled and their use is unethical.

II. MITIGATION STRATEGIES

So far we have seen some very serious problems that block the intelligent control of telerobotic weapon systems, which is a necessary condition for their ethical use. I must admit that this is not a universally held claim. Ronald Arkin argues that it is possible to develop these systems in ways that actually enhance the possibility for just conduct in warfare, and has presented his arguments in a technical report funded by the U.S. military [18]. Wendell Wallach and Colin Allen argue for pragmatic reasons that, "...if the proponents of fighting machines win the day, now will be the time to have begun thinking about the built-in ethical constraints that will be needed for these and all (ro)botic applications," and they have offered some ideas on how to accomplish this [19]. Michael and Susan Anderson also argue that machine morality is of paramount concern and have offered some ideas on how to program ethical decision making [20]. And there are many more books and papers being written on the subject. Still, many of these efforts are centered on autonomous robots, and little focus has been paid to the much more common telerobotic systems. It is easy to think that since a moral agent is controlling the telerobot, the telerobot's actions must be moral, but this is a suspect argument, because the design of telerobots limits our ability to intelligently control them, whether or not the controlling agent is a moral agent.

Telepistemological distancing has been shown to be the root cause of many of these issues, yet this very same distancing also has the ethically positive ability to reduce casualties for at least some of the combatants. It is hard to say whether the positive outweighs the negative.

It may be impossible
to remove all of the negative factors
surrounding telepistemological distancing but there might
also be ways of mitigating their most pernicious effects.
These are my recommendations.

1) Constant attention must be paid to the design of the remote sensing capabilities of the weapon system. Not only should target information be displayed, but information relevant to making ethical decisions must not be filtered out. Human agents must be easily identifiable as human and not objectified by the mediation of the sensors and their displays to the operator. If this is impossible, then the machine should not be operated as a weapon.

2) A moral agent must be in full control of the weapon at all times. This cannot be limited to just an abort button. Every aspect of the shoot or don't-shoot decision must pass through a moral agent (see the sketch following this list). Note, I am not ruling out the possibility that that agent may not be human; an artificial moral agent (AMA) would suffice. It is also important to note that AMAs that can intelligently make these decisions are a long way off. Until then, if it is impossible to keep a human in the decision loop, then these machines must not be used as weapons.

3) Since the operator him- or herself is a source of epistemic noise, it matters a great deal whether or not that person has been fully trained in just war theory. Since only officers are currently trained in this, only officers should be controlling armed telerobots. If this is impossible, then these machines should not be used as weapons.

4) These weapons must not be used in any way that normalizes or trivializes war or its consequences. Thus shift-work fighting should be avoided. Placing telerobotic weapons control centers near civilian populations must also be avoided, since such a center is a legitimate military target and anyone near it is in danger from military or terrorist retaliation.

5) These weapons must never be used in such a way as to prolong or intensify the hatred induced by the conflict. They are used ethically if and only if they contribute to a quick return to peaceful relations.
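To make recommendations 1 and 2 concrete, here is a minimal sketch, in Python, of how a fire-control gate satisfying them might be structured. It is purely illustrative: every name in it (EngagementRequest, Decision, fire_control_gate) is hypothetical and describes no fielded system. The point is only the architecture: no lethal action is possible without an explicit, logged authorization from a moral agent, and every fault defaults to not firing.

    # Illustrative sketch only: a fire-control gate that refuses to act
    # unless a moral agent has explicitly authorized this engagement.
    # All names are hypothetical; no real weapon system is described.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Callable, List, Optional

    @dataclass(frozen=True)
    class EngagementRequest:
        target_id: str
        sensor_confidence: float    # 0.0-1.0, from the perception stack
        human_presence_known: bool  # can the display reliably indicate
                                    # human presence? (recommendation 1)

    @dataclass(frozen=True)
    class Decision:
        authorized: bool
        agent_id: str               # the accountable moral agent (human or AMA)
        rationale: str              # recorded for after-action review
        timestamp: datetime

    def fire_control_gate(
        request: EngagementRequest,
        ask_moral_agent: Callable[[EngagementRequest], Decision],
        audit_log: List[object],
    ) -> bool:
        """Return True only if a moral agent explicitly authorizes firing."""
        # Recommendation 1: if the sensor/display pipeline cannot indicate
        # whether humans are present, the machine may not act as a weapon.
        if not request.human_presence_known:
            audit_log.append(("refused: human identification unavailable", request))
            return False

        decision: Optional[Decision] = None
        try:
            # Blocks until the human operator (or, someday, an AMA) decides;
            # this is the step recommendation 2 forbids automating away.
            decision = ask_moral_agent(request)
        except Exception:
            decision = None         # any fault in the loop fails safe

        audit_log.append((decision, request))
        return decision is not None and decision.authorized

The essential design choice is that the gate fails safe: a missing decision, a fault in the channel to the moral agent, or an inability to identify humans in the scene each default to "do not fire", mirroring the "if this is impossible, then these machines should not be used as weapons" clauses above.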




ACKNOWLEDGMENT

I would like to thank my students and colleagues for their many stimulating discussions on this topic, which helped formulate my thoughts as they are expressed in this paper. I would also like to particularly thank George Ledin, who has graciously supported my efforts in this field of study and who has given me many opportunities to present my ideas to his students and colleagues in computer science. Finally I would like to thank my research assistant Jennifer Badasci, who was instrumental in the completion of this work.

REFERENCES

[1] C. Mitcham, Thinking Through Technology: The Path between Engineering and Philosophy. Chicago: University of Chicago Press, 1994.

[2] J. P. Sullins, "Telerobotic weapons systems and the ethical conduct of war," The American Philosophical Association Newsletter on Computers and Philosophy, submitted for publication, 2009. Available: http://www.apaonline.org/publications/newsletters/index.aspx

[3] J. P. Sullins, "Ethics and artificial life: From modeling to moral agents," Ethics and Information Technology, vol. 7, 2005, pp. 139-148.

[4] J. P. Sullins, "When is a Robot a Moral Agent?," International Review of Information Ethics, vol. 6, December 2006. Available: http://www.i-r-i-e.net/inhalt/006/006_Sullins.pdf

[5] J. P. Sullins, "Friends by Design: A Design Philosophy for Personal Robotics Technology," in Philosophy and Design: From Engineering to Architecture, P. E. Vermaas, P. Kroes, A. Light, and S. A. Moore, Eds. Springer, 2008, pp. 143-158.

[6] J. P. Sullins, "Artificial Moral Agency in Technoethics," in Handbook of Research on Technoethics, R. Luppicini and R. Adell, Eds. Idea Group Inc, 2008, pp. 205-221.

[7] R. Arkin, Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture, Technical Report GIT-GVU-07-11, 2007, p. 57. Available: http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf

[8] A. Goldman, "Telerobotic Knowledge: A Reliabilist Approach," in The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet, K. Goldberg, Ed. Cambridge, Mass.: MIT Press, 2001, pp. 126-143.

[9] P. W. Singer, Wired for War. New York: Penguin Press HC, 2009, p. 202.

[10] Ibid [2].

[11] Ibid [9].

[12] M. L. Kelly, "The Nevada Home of the Predator Aircraft," National Public Radio, All Things Considered, September 16, 2005. Available: http://www.NPR.org

[13] G. Knapp, "Predator UAV 'Battle Lab' Just North of Las Vegas," 2005. Available: http://www.klas-tv.com/Global/story.asp?S=3001647CBS

[14] Ibid [2].

[15] Ibid [9].

[16] Ibid [2].

[17] Ibid [9].

[18] Ibid [7].

[19] W. Wallach and C. Allen, Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press, 2009, p. 21.

[20] M. Anderson, S. Anderson, and C. Armen, "An Approach to Computing Ethics," IEEE Intelligent Systems, 2006, pp. 56-63.