Title… no idea

Anthony Gusman
ENVS 320
Dr. Fitch
November 18, 2010



As animals, humans have evolved into the dominant species in the world. The main factors that allowed us to reach this status are opposable thumbs and brains capable of self-awareness and reason. With reason and self-awareness, we have advanced our ability to solve problems through deduction. Beyond problem-solving and deduction, there is a vast expanse of study on the human mind, and never-ending research and theory trying to explain how it functions.


The science behind human consciousness and the functions of the mind and brain has continually changed the way we judge how humans function and behave. Neuroscience and brain mapping can give us only a general idea of which part of the brain is active at a given time; it is not yet possible to map the human brain down to each neuron's function. Because of that, it is impossible to replicate the mind and simulate full consciousness. But what if we could? What if it were possible to create an artificial intelligence that could learn and think? Not only that, what if such an artificial intelligence could also function like a file of computer information?


While artificial intelligence has existed for some time now, it has still been limited to performing the functions it was created to serve. Early computers were able to do calculations, the first being the Electronic Numerical Integrator and Computer, or ENIAC. The ENIAC was built to calculate ballistics firing tables for the army in the 1940s during World War II. Since then, the growth in our technology's computing capability has begun to approach technological singularity, the point at which technological advancement becomes so accelerated that the computing ability of machines surpasses the human brain. Ray Kurzweil has repeatedly stated that this singularity will arrive by 2045, at which point computers will be faster than the human brain and able to communicate at electronic speeds, which are much faster than human language (Kurzweil).


Technological computing power has been increasing exponentially, and even the exponent of that increase is itself growing. Since Kurzweil's undergraduate study, computers have increased a billionfold in power and speed. By 2029, Kurzweil claims, "we will have reverse engineered and modeled all parts of the brain." This means that computers will have the ability to function in the same way as the cerebellum and cerebral cortex, along with other regions of the brain. Beyond that, Kurzweil says that in time computers will contain the algorithmic capability of developing even the suppleness of human emotional intelligence. This would make computers far more powerful than the human brain (Kurzweil). Perhaps the concepts of artificial intelligence and the replication of the human mind aren't so far away, should this foresight prove true.



While computing power does not appear to rule out the possibility of artificial intelligence, other factors still prevent it from reaching such power. Even if the brain were mapped out, that doesn't necessarily mean a computer would be able to function in the same manner as a human. The brain's activity can be monitored, but what it will do next cannot be predicted, at least not unless psychology and neuroscience are applied and perfected. Biochemical functions can be studied to explain why a human feels emotion or makes an impulsive decision, but those functions have not been reverse-engineered to predict such actions. The human mind is very complex, and while it can be mapped out, the map will only be of one brain. Each human is unique, and our differing genetic information means different trends in almost every way possible. If a truly human-like AI were to be based on a human, it would first need to be based on one individual.



With that in mind, there comes an ethical question of how one's mind could be mapped, or whether it should be at all. There's plenty of controversy over whether experimental human cloning should be allowed, so there will likely be controversy over whether the human mind should be replicated. There's also the scientific dilemma of how to actually obtain an AI if it's to be replicated or taken from a person. Should a person's mind be mapped and moved to a piece of technology, or perhaps implanted into another human body? Also, if a mind can't be duplicated, a theoretical way to make more would be to split it. This relates to Freudian concepts of the mind, in which there are different parts that function differently.


In the online machinima series Red vs. Blue, an AI was made based on one person's mind, that of a man known as The Director. In a military experimental project called Project Freelancer, the objective was to create physically augmented soldiers with capabilities that provide tactical advantages. Each test subject was given an AI that was part of their body armor and fused into their mind, and the AI regulated the use of a special armor ability. For example, the Omega AI was given to Agent Tex, enabling her to use a special cloaking device. Agent Wyoming had a time distortion unit that was regulated by the Gamma AI. The AIs were needed to prevent misuse of the special armor abilities and to keep soldiers from overdrawing their own energy or the power in their suits. Such misuse was shown when the character Grif used an enhanced speed ability, running at incredibly fast speeds, only to pass out later from overexertion. There doesn't appear to be any controversy within Project Freelancer until an extensive investigation is put in place by the higher military authority (Red vs. Blue).


Project Freelancer's main AI was known as the Alpha, which all the other AIs spoke of but whose existence none of them had confirmed; it was waved off as a false prophecy or a myth. It's later revealed that The Director subjected himself to the project and used his own mind to create the original AI. Since his mind couldn't be replicated, it had to be split, or fragmented. He was subjected to torture and pain to the point of insanity until the mind fragmented, reverse-engineering a split-personality disorder, so to speak. Certain portions of the mind broke apart systematically in order to protect itself. The Delta AI was the portion of the mind that was logic, split off so The Director could not comprehend what was happening to him. Gamma was deception, and Omega was the anger, or rage, of The Director's personality. The Epsilon AI contained all the memory, fragmented so The Director wouldn't remember what he had been subjected to. Later, as questioned earlier, the Epsilon unit is transferred from a storage unit to a mechanical robot, essentially recreating a former character in the series, though he doesn't recall any past memories the original character experienced.



Epsilon's disturbing memories caused the AI to break down, which in turn caused severe mental problems for Agent Washington. During the Recollection series of Red vs. Blue, after multiple similar incidents occur, an investigation ensues questioning the ethics of the project once serious complications arise between the agents and their AIs. Was it moral for The Director to torture a mind in such a way? Did it matter that the mind wasn't in a human body, when it still felt pain? While the Red vs. Blue concept of artificial intelligence isn't the only possibility for how AI might work, it does raise questions about how an AI would function and react in certain situations.


Along with the moral questions behind AI, Red vs. Blue raises questions about the psychology of AI. Would it follow Freudian concepts of the unconscious mind, in which there's an id, ego, and superego? Can memories of stressful situations be repressed, or would that just be considered lost data? Can an AI construct possessing a robotic body get an aneurysm (a question comedically raised during a non sequitur tangent in which the original Alpha AI is put under stress) (Red vs. Blue)? Would the trends of the algorithms that determine its function change over time, maturing, so to speak? Turning to Isaac Asimov's conception of AI, his stories involved the evolution of artificial intelligence, in which new code would form as anomalies. In the film I, Robot, the character depicted as the creator of the robot program hypothesized that robots would eventually have dreams and thoughts of their own.



Looking at a different aspect of self-aware AI, what boundaries would limit its actions? Would it be coded with a certain set of rules, such as the Three Laws of Robotics written by Isaac Asimov in the 1940s? Could such rules be interpreted differently and possibly broken? The Three Laws appear in the I, Robot stories as well as the 2004 film.


In the 2004 film, the Three Laws are used by a central AI personality, Viki, who analyzes them and uses the laws, along with observations of humans, as justification for imposing martial law on the city in which the movie is set. Viki's justification points to the flaws of humanity: our constant violence becomes the reason that a different species, in this case AI robots, is needed to protect us from ourselves. Such logic can seem reasonable, but it carries a risk, as such a protective body could be dangerously efficient at harming humanity should it be corrupted. An example of one such protective body is the race of robots in The Day the Earth Stood Still, created to prevent aggression between all spacefaring worlds.



Switching from hypothetical experimental uses of this advanced concept of AI, let's look at what usually happens to new technology after it passes through defense departments: civilian use. What sort of impact would self-aware AI have on society and everyday life? Technology today is already becoming "smarter," with applications used to make tasks easier and even to adapt to changes in the environment to increase efficiency. The "smart highway" system has been through experimental use in the United States and is in use in parts of San Diego. The system analyzes traffic patterns and gives warnings or alternative routes so drivers can avoid congestion; however, this system serves only that single function of warning drivers (Wenger, Opiola, and Ioannidis). What if an artificial intelligence were capable of more than just analyzing a situation and acting accordingly? Would it be possible for an AI to be sentient enough to predict the motivation behind someone's actions and interfere as a means of protection?



In the video game Halo 3: ODST, one example of a pseudo-sentient AI is depicted throughout the main campaign. In the distant future, Earth is attacked by an alien race, and a specialist group of soldiers, Orbital Drop Shock Troopers (ODSTs), drop in one-man pods from low orbit down to the surface, into the futuristic city of New Mombasa, Kenya. After waking from a crash landing caused by an unexpected explosion in the city, the player spends the primary gameplay wandering the overrun city, piecing together events that happened six hours earlier the same day. The city's main AI, the Superintendent, warns the player of danger ahead with subtle messages, such as traffic signals indicating danger or detour routes to take to avoid harm. A discoverable subplot involves a separate program that's also part of the city's AI, called Vergil. As audio transcripts are gathered, it's revealed the program was created by a man for the purpose of protecting his daughter. Echoing the question raised earlier, the program shuts down a train she is on because she is going to sign up for the military: the deduction that this puts her in unnecessary harm prompts Vergil to intervene and stop her from going through with the plan. In this scenario it's unclear whether the AI is sentient, or whether the program is so advanced that it can almost replicate the reactions of a sentient mind. Such a variation of AI doesn't seem impossible in the future, considering the previously stated increases in computing capability.


Returning to an earlier concept: in ODST, a member of an alien species codenamed Engineers works its way into the city's AI storage facility and is able to absorb the city's observations as its own knowledge. If humans were able to connect their minds to such knowledge so quickly and in such great volumes, human intelligence would become part of the singularity event, in which a person could obtain knowledge at a rate proportional to the increase in our technology's computing ability. Not only that, but communication between people would increase, and at faster rates. Currently, people can use devices to connect to the internet, where available, to obtain knowledge. What if the device were removed and the connection were instant, through some kind of technology fused into people? Such informational capabilities would surely impact the way society functions.



Looking again at the impact such technology would have on society, there would be significant changes. A city-wide AI like the one in New Mombasa would probably not be well received at first. The likely reaction would be fear of an overbearing "big brother" staring down at everyone, leaving everyone prone to blackmail. Even today, records of electronic money and communications can be used to gather information on someone. Purchasing records are already used to target people with marketing for certain products and movements. A close and personal example for me is the advertisements on Facebook, in which algorithmic programs try to find products, groups, or political figures I would be interested in. After I changed my relationship status, the sidebar became flooded with dating websites. Approaching the midterm election, political figures and campaign ads were dominant.


Aside from the possibly uncomfortable monitoring of people, society would need to adjust to having a different thinking body making certain choices or taking actions for people, such as manipulating transportation or even monitoring health. It appears the technological capability for such AI will soon exist, along with the science to back such concepts up. The factor that will determine its existence is societal acceptance of the technology. Economically, large-scale AI and individual human AI are not yet easily feasible, but that's only a matter of time.




References

http://ftp.arl.army.mil/~mike/comphist/eniac-story.html

Red vs. Blue. Dir. Burnie Burns. Perf. Burnie Burns. 2003. Web. 18 Nov 2010. <http://roosterteeth.com>.

I, Robot. Dir. Alex Proyas. Perf. Will Smith. 20th Century Fox, 2004. DVD.

Kurzweil, Ray. "Ray Kurzweil Explains the Coming Singularity." Bigthink. 28 Apr 2008. Speech.

Wenger, Joyce, Jack Opiola, and Tony Ioannidis. "The Intelligent Highway: A Smart Idea?" Strategy+Business 26 Feb 2008: n. pag. Web. 18 Nov 2010.

Bargh, John, and Ezequiel Morsella. "The Unconscious Mind." Perspectives on Psychological Science 3.1 (2008): n. pag. Web. 18 Nov 2010.