
From Ethics to Environmental Ethics:
From the Nineteenth Century Biomedical Method to an Understanding of Complexity in the Twenty First Century.

Frances Robinson.

IEPPP, Lancaster University.

2004-03-03

ABSTRACT


Environmental ethicists have the task of determining which factors should be taken into account when making decisions that concern our interactions with non-human systems. In this paper the task is examined from the perspective of our more recent understanding of complex systems. The discussion focuses both on the evolution of networks, and on the properties of networks; and, in its conclusion, raises questions about our present interaction with non-human systems.

Key Words: random networks, scale-free networks, Barabási, Wolfram, ethics, complexity and animal experimentation.


Traditional ethical theories are concerned with the interactions of human beings. Environmental ethical theories are concerned with the interactions of human and non-human systems. As in traditional ethics, the ethical theories aim to offer guidelines on decision-making, by determining which factors should be taken into account when trying to decide which is the best action to take in particular situations. It has been necessary for environmental ethicists to address the following questions:


a. Can traditional ethical theories be expanded to include non-human systems?

b. To which non-human systems should these theories be extended; that is, should they be extended to sentient beings, to biological organisms, or to biological and non-biological systems?


On reflection, one realises that one cannot exclude any type of system from the ethical calculus. One cannot assume a priori that non-living systems do not count. Nevertheless, it may be useful to be able to distinguish between these different types of system: that is, between sentient and non-sentient beings, and between living and non-living systems.


What is a living system? Alan Cann makes a distinction between viruses and ‘living’ systems. He defines viruses as sub-microscopic, obligate intracellular parasites with the following features:

- Virus particles are produced from the assembly of pre-formed components, whereas other agents ‘grow’ from an increase in the integrated sum of their components and reproduce by division.

- Virus particles (virions) themselves do not ‘grow’ or undergo division.

- Viruses lack the genetic information which encodes apparatus necessary for the generation of metabolic energy or for protein synthesis (ribosomes). (Cann, 1997, pp.1-2)


Cann’s distinction thereby highlights some of the properties of living systems; namely,

a. Growth from an increase in the integrated sum of the system’s components.

b. Reproduction by division.

c. Having the genetic information necessary for the generation of metabolic energy, and for protein synthesis.

From this one can conclude that a living system grows and divides, and is equipped with the instructions to direct the process by which it can grow and divide. Thus growth, division and rules are necessary features of a living system.


How does one discern whether or not that living system is sentient? What does sentience imply? Bernard Baars makes the distinction between consciousness and intelligence.

a. Intelligence is the ability to solve problems; and problem-solving abilities are highly species-specific. For example, pigeons excel at finding their way in air space; and their ability to do so far exceeds the unaided abilities of humans.

b. Consciousness is awareness; and the brain mechanisms of conscious alertness and conscious perception have an extremely widespread distribution among vertebrates and invertebrates. Species differences, such as the size of the neocortex, seem to be irrelevant to the existence of wakefulness and perceptual consciousness. (Baars, 2001, p.33)

Since the 1920s it has been recognised that there is a major difference between the electroencephalograms (EEGs) of waking consciousness and those of deep, unconscious sleep. In waking consciousness, the EEG shows fast, irregular and low voltage field activity throughout the thalamocortical core. Such activity supports the reports of the conscious experiences of humans. The underlying brain activity is so similar in monkeys and cats, that they are routinely substituted for humans in experiments. The EEG reflects the irregular firing of billions of single neurons, and the complex interactions between them. In contrast, in deep, unconscious sleep there is slow, regular and high voltage field activity throughout the thalamocortical core. In this case the EEG reflects the highly regular and highly synchronised firing patterns in the same billions of individual neurons. The same pattern of a slow-wave, highly synchronised EEG appears in other states of global unconsciousness, such as in general anaesthesia, coma and epileptic ‘states of absence’. In all of these cases, humans do not report the events that they experience during the conscious waking state. All mammalian species that have been studied so far exhibit the same massive contrast in electrical brain activity between waking and deep sleep. Baars was unable to find a single exception in the research findings. There has been over 70 years of highly consistent evidence. (Baars, 2001, p.36)


Waking consciousness is not some vaguely global property of the brain. Rather it is dependent upon a few highly specific brain locations. In all mammals the state of waking consciousness requires the brainstem reticular formation and the intralaminar nuclei of the thalamus. Brainstem mechanisms, like the reticular formation, are extremely phylogenetically ancient, going back at least to the early vertebrates. Thalamic structures, like the intralaminar nuclei, also exist in mammals generally. Both of these facts indicate that the brain anatomy of waking consciousness is very ancient indeed. (Baars, 2001, p.36)


There have been fewer studies of consciousness in invertebrates than there have been studies of consciousness in vertebrates. According to Sherwin, a central difficulty is that:

Invertebrates have different sensory organs and nervous systems and so might perceive nociception or pain in an entirely different way to vertebrates, but still experience a negative mental state. … [He continues:] [P]ublished studies … show that invertebrates such as cockroaches, flies and slugs have short- and long-term memory; have age effects on memory; have complex spatial, associative and social learning; perform appropriately in preference tests and consumer demand studies; exhibit behavioural and physiological responses indicative of pain; and, apparently, experience learned helplessness. The similarity of these responses to those of vertebrates may indicate a level of consciousness or suffering that is not normally attributed to invertebrates. (Sherwin, 2001, pp.104, 103)


It would appear that there is a behavioural analogy, but not an anatomical analogy. Therefore one can claim neither experiential analogy, nor causal analogy.

Thus, even though it may be possible to distinguish between living and non-living systems, establishing the boundary between sentience and non-sentience in living systems appears to be problematic.

In addressing the environmental ethicists’ first question, one needs to be familiar with the traditional ethical theories in Western Philosophy. The bases of, and some of the problems with, these theories are as follows:


A. The deontological theory, as developed by Immanuel Kant, proposes that the right action is determined by the act itself. Kant’s theory revolves around the moral law known as the Categorical Imperative, which implies that an act is right if it can become a universal principle. According to Kantian theory some acts are obligatory, some acts are prohibited, and some acts are permissible. Kant based his argument upon these different acts being duties, and thus his ethical theory is a duty-based theory. Kant then argued that only rational beings would choose to act out of duty rather than out of inclination, and so he argued that only rational beings would have moral worth, and therefore only rational beings would have intrinsic value. According to Kant,

Beings whose existence depends, not on our will, but on nature, have none the less, if they are non-rational beings, only a relative value as means and are consequently called things. Rational beings, on the other hand, are called persons because their nature already marks them out as ends in themselves - that is, as something which ought not to be used merely as a means - and consequently imposes to that extent a limit on all arbitrary treatment of them. (Kant, 1991, pp.90-91)

It is not appropriate to address all of the arguments and counter-arguments in relation to Kant’s theory in this paper. Nevertheless, of note are the following:


a. One can readily imagine a situation in which a prohibited action does appear to be the appropriate action to take. For example, stealing is a prohibited act in deontological ethics. However, one might condone a father stealing food for his starving child, if there was no other possible means of acquiring food for that child. This would appear to make the claim that such an act is a duty disputable. It may be more appropriate to consider the Categorical Imperative as being a moral guideline rather than a moral law.

b. By the exclusion of all but rational beings from having moral worth, Kant has markedly reduced the boundaries of that which is to be morally considered. However, trying to ascertain rationality in sentient beings with which one cannot communicate sufficiently well, if at all, is problematic; particularly when these different sentient beings are addressing different problems. A further difficulty lies in our understanding of the term ‘rationality’; namely, whether one interprets rationality as being based upon a classical or a ‘fuzzy’ logic.

Thus it would appear that the deontological theory, as presented by Kant, is an inappropriate ethical theory for environmental ethics.


B. The utilitarian or consequentialist theory was developed by Jeremy Bentham. According to Bentham,

The day may come when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of the tormentor. It may one day come to be recognised that the number of legs, the villosity of the skin, or the termination of the os sacrum are reasons equally insufficient for abandoning a sensitive being to the same fate. …. The question is not, Can they reason? nor, Can they talk? but, Can they suffer? (Singer, 1995, p.7)


The right action in this ethical theory is determined by the consequences of the act; such that the right action is the one that produces the greatest happiness for the greatest number. However, this theory is also problematical. Criticisms of the theory are as follows:

- That there is a risk of tyranny by the majority.

- That there is a lack of restrictions upon actions.

- That the theories are impersonal.

- That it is questionable that all of the consequences are predictable.


Once again it is inappropriate to address the many arguments and counter-arguments associated with this theory. However, of note are the first and the last criticisms. Much of public policy is based upon a utilitarian ethic. However, a safeguard against the risk of tyranny by the majority is provided by the introduction of human rights. Human rights have a deontological basis. One can then see that it is difficult to safeguard all but those that we are able to define as rational beings, whatever rational implies, from the risk of tyranny by the majority. Secondly, this ethical guideline depends upon the consequences of the act being predictable. With our more recent understanding of the unpredictability of consequences, it would seem that a utilitarian ethic is an inappropriate ethical theory for environmental ethics.


At this point it seems appropriate to return to the questions that it has been necessary for environmental ethicists to address; namely,

a. Can traditional ethical theories be expanded to include non-human systems?

b. To which non-human systems should these theories be extended; that is, should they be extended to sentient beings, to biological organisms, or to biological and non-biological systems?


It appears that environmental ethicists cannot just extend traditional ethical theories. It also appears that although we may be able to make a distinction between non-living and living systems, we are less able to distinguish between sentience and non-sentience in living systems.


New ethical theories that relate to the environment have been advanced. Of note is Aldo Leopold’s Land Ethic, in which the land is considered to be a community. According to Leopold,

An ethic, ecologically, is a limitation on freedom of action in the struggle for existence. An ethic, philosophically, is a differentiation of social from anti-social conduct. These are two definitions of one thing. The thing has its origins in the tendency of interdependent individuals or groups to evolve modes of co-operation. (Leopold, 1949, p.202)

He continues,

Land … is not merely soil; it is a fountain of energy flowing through a circuit of soils, plants, and animals. Food chains are the living channels which conduct energy upward; death and decay return it to the soil. The circuit is not closed; some energy is dissipated in decay, some is added by absorption from the air, some is stored in soils, peats, and long-lived forests; but it is a sustained circuit, like a slowly augmented revolving fund of life. …. When a change occurs in one part of the circuit, many other parts must adjust themselves to it. Change does not necessarily obstruct or divert the flow of energy; evolution is a long series of self-induced changes, the net result of which has been to elaborate the flow mechanism and to lengthen the circuit. Evolutionary changes, however, are usually slow and local. Man’s invention of tools has enabled him to make changes of unprecedented violence, rapidity, and scope. (Leopold, 1949, pp.216-217)


Donald Worster also acknowledges the importance of the rates of change. According to Worster,

Environmental conservation becomes … an effort to protect certain rates of change going on within the biological world from incompatible changes going on within our economy and technology. …. The pace of innovation in computer chips may be appropriate to a competitive business community, but it is not appropriate to or always compatible with the evolution of a redwood forest. (Worster, 1994, pp.432-433)


With regard to the development of an environmental ethic, Leopold writes:

The evolution of a land ethic is an intellectual as well as an emotional process. Conservation is paved with good intentions which prove to be futile, or even dangerous, because they are devoid of critical understanding either of the land, or of economic land-use. ….

The mechanism of operation is the same for any ethic: social approbation for right actions: social disapproval for wrong actions. By and large, our present problem is one of attitudes and implements. We are remodelling the Alhambra with a steam-shovel, and we are proud of our yardage. We shall hardly relinquish the shovel, which after all has many good points, but we are in need of gentler and more objective criteria for its successful use. (Leopold, 1949, pp.225-226) (My italics)


Although Leopold wrote his text over 50 years ago, the wisdom of his words harmonises perfectly with more recent scientific discoveries; that is, with our recognition of the interconnectedness of systems that range from systems on a Cosmic scale to the system of an individual cell; and from the latter to those that arise at the scale of the elementary particles. His prophetic words about the heavy-handedness and insensitivity of our actions are now keenly understood. Our mission therefore has been, and must be, to understand better the nature of that interconnectedness, and to learn how to act more co-operatively - more socially - within this much wider framework.




In the physical sciences, there have been recent advances in an understanding of networks; both in the topology of networks, and in network evolution. Of note are the research findings of Albert-László Barabási et al and of Stephen Wolfram. The results of their research will now be discussed; and it would appear that their different research findings are mutually consistent.

1. The research findings of Albert-László Barabási et al.

Graph theory began with Leonhard Euler, who introduced the idea of nodes connected by links. (Barabási, 2002, p.10)


Paul Erdős and Alfréd Rényi were interested in the formation of networks, which, like graphs, consisted of nodes and links. They were looking for a single framework with which to study different networks. For this framework, they chose random graphs. Erdős and Rényi discovered that if there were enough links between the nodes, such that the average number of links per node was equal to 1, a giant cluster emerged. The threshold was one link. As the number of links per node increased, the number of nodes excluded from the cluster decreased exponentially. In regular graphs, each node has exactly the same number of links. This is not the case with random graphs. However, if a random network is sufficiently large, each node will have almost the same number of links. The random theory of Erdős and Rényi equated complexity with randomness; and their theory has dominated scientific thinking about networks since its introduction in 1959. (Barabási, 2002, pp.17-23)


In 1929 the Hungarian writer, Frigyes Karinthy, wrote a book of short stories - the English version of the title being Chains. The concept that Karinthy introduced was that each person could be linked to every other person through five acquaintances. In 1967 Stanley Milgram used this concept in a study of interconnectivity. He found that the median number of intermediate persons was 5.5. (Barabási, 2002, pp.25-29)


The vision of Tim Berners-Lee in the 1980s was that all the computers around the world could be connected. This vision is now a reality in the form of the virtual network of the WWW. Steve Lawrence and Lee Giles at the NEC Research Institute tried to estimate the size of the WWW; and discovered that it was growing faster than the growth of human society. At the end of 1998, Albert-László Barabási, Réka Albert and Hawoong Jeong set out to discover the size of the world behind the Web. At that time the WWW was estimated to have contained ~800 million documents, and the diameter of the Web was predicted, using empirical data and statistical mechanics, to be 18.59 (19); in other words, the degree of separation between documents was ~19. (Barabási, 2002, pp.30-34) Further research has shown the following degrees of separation:

a. 2 links in species webs.

b. ~3 chemical reactions in molecules in the cell.

c. 4-6 links between co-authors of scientific papers.

d. 14 synapses between the neurons in the brain of the worm, Caenorhabditis elegans. (Barabási, 2002, p.34)

Research was beginning to show that exceedingly large networks collapse to a separation that is far shorter than the number of nodes present in the network.



This small world phenomenon appears to be a generic property of networks.

In 1973 Mark Granovetter published a paper entitled ‘The strength of weak ties’. Through his research Granovetter had discovered an image of society that differed from the random networks of Erdős and Rényi. Granovetter’s image of society was of highly connected clusters connected by a few external links, which he referred to as ‘weak ties’. (Barabási, 2002, pp.41-43)

In the mid-1990s Duncan Watts and Steven Strogatz set out to measure clustering, and introduced a quantity known as the ‘clustering coefficient’, which is the ratio of the number of actual links to the number of possible links between nodes. From later research it would appear that the most important discovery of Watts and Strogatz is that:

Clustering is a generic property of networks. (Barabási, 2002, pp.46, 50-51)

The following are examples of networks that show such clustering:

a. The wiring of the nervous system of the worm, Caenorhabditis elegans.

b. The electricity network of the Western United States of America.

c. The network of Hollywood actors.

d. The web pages of the WWW.

e. Companies linked by joint ownership.

f. Species feeding on each other in ecosystems.

g. Molecules within the cell. (Barabási, 2002, pp.50-51)

However, Watts and Strogatz recognised that there was a price to pay for high clustering; namely, that the small world phenomenon could be lost. They realised that, in reality, links do exist between distant clusters, and when they added even a few random links between clusters in their model, the average separation between the nodes was drastically reduced. These few links did not affect the clustering coefficient, but they did spectacularly reduce the separation between nodes. (Barabási, 2002, pp.52-53)
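The following short sketch (an illustration added here, not taken from Barabási's text, and assuming the Python networkx library is available) reproduces this effect on a small model network: a handful of random long-range links collapses the average separation between nodes while leaving the clustering coefficient almost untouched. The network size and rewiring probabilities are arbitrary choices.

```python
# Illustrative sketch of the Watts-Strogatz observation, assuming the networkx library.
import networkx as nx

n, k = 1000, 10                              # 1000 nodes, each linked to its 10 nearest neighbours
for p in (0.0, 0.01, 0.1):                   # fraction of links rewired at random
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
    C = nx.average_clustering(G)             # ratio of actual to possible links among a node's neighbours
    L = nx.average_shortest_path_length(G)   # average separation between pairs of nodes
    print(f"rewiring p={p:<5}: clustering C={C:.3f}, average separation L={L:.1f}")
```

Even the smallest rewiring probability shown here shortens the average separation dramatically, while the clustering coefficient barely changes.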


The models of both Erdős and Rényi, and of Watts and Strogatz, are deeply egalitarian with regard to the distribution of links to nodes. However the findings of Barabási et al could not be adequately represented by either of these models. Barabási et al discovered that some nodes were much more highly connected than others; this discovery was corroborated by the findings of Malcolm Gladwell, who, in The Tipping Point, concluded that in every walk of life there are a handful of people who are very highly connected. Gladwell referred to these people as ‘connectors’. (Barabási, 2002, pp.54-55)

It appears that ‘connectors’, also known as ‘hubs’, are a generic property of networks; and that it is the domination of the structure of a network by hubs that leads to the small world phenomenon.


Barabási et al discovered that the distribution of links in a network dominated by hubs followed a power law. This was of particular interest, as most quantities in Nature appeared to follow a bell-curve distribution. A power law distribution indicates that many small events are co-existing with a few very large events; and the histogram following a power law is an asymptotically decreasing curve. In contrast, exceedingly large events are forbidden in a bell-curve distribution. Barabási et al concluded that there were two types of network: one that could be represented by a bell-curve distribution, and the other that could be represented by a power law.

The bell-curve distribution represents a random network, in which most of the nodes have approximately the same number of links; and this is shown by a peak degree distribution. It is because of this peak degree distribution that a random network has a characteristic scale embodied in the average node.

The network represented by a power law differs in that there is no peak distribution. In such a network it is the hubs that hold the network together. The network following a power law distribution has a hierarchy of nodes; and, since there is no peak distribution, there is no characteristic scale. Barabási referred to such networks as ‘scale-free’ networks. According to Barabási, the finding that most networks of conceptual importance are scale-free has not only given legitimacy to hubs, but has also provided proof of the highly important organising principles of network evolution. (Barabási, 2002, pp.67-72)
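The contrast between the two kinds of distribution can be seen directly in a small numerical comparison (an illustration added here, assuming the networkx library; the network sizes are arbitrary choices): the random network peaks around its average node and has no large hubs, while the scale-free network has a fat tail of very highly connected nodes.

```python
# Illustrative comparison of a random and a scale-free degree distribution, assuming networkx.
import networkx as nx

n = 10000
networks = {
    "random":     nx.gnm_random_graph(n, 3 * n, seed=1),   # links placed uniformly at random
    "scale-free": nx.barabasi_albert_graph(n, 3, seed=1),  # growth + preferential attachment
}
for name, G in networks.items():
    degrees = [d for _, d in G.degree()]
    avg = sum(degrees) / len(degrees)
    hubs = sum(1 for d in degrees if d > 10 * avg)          # nodes with over ten times the average degree
    print(f"{name:10s}: average degree {avg:.1f}, largest hub {max(degrees)}, nodes above 10x average: {hubs}")
```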


In 1971 Kenneth Wilson proposed an all-encompassing theory of phase transitions; namely, renormalization. He made an assumption that at the critical point the laws of physics apply identically at all scales. By giving a rigorous mathematical foundation to scale invariance, his theory produced power laws each time the critical point was approached. It would appear that the emergence of power laws is Nature’s unmistakeable sign that disorder is being replaced by order. The appearance of power laws at the critical point in the transition from disorder to order is the same for a wide range of systems; it is a form of universal behaviour. The hubs in networks are the consequence of power laws, and thus suggest self-organisation and order. (Barabási, 2002, pp.76-78)

Power laws signify self-organisation in complex systems.


The random networks of Erdős and Rényi rested upon the two following assumptions:

a. That the number of nodes in the network remained the same.

b. That all of the links were equivalent.

Most real networks grow, and Barabási et al incorporated growth into their models. In addition, they realised that hubs attracted links. They discovered that the mechanism behind scale-free networks was growth and preferential attachment. With preferential attachment, the rich-get-richer pattern appears. (Barabási, 2002, pp.81-88) Barabási et al also realised that new links could emerge spontaneously between already existing nodes; that both nodes and links could disappear; and that links could be rewired. Barabási and Albert investigated the effects of internal links and of rewiring on the structure of scale-free networks.
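The two ingredients named above can be simulated in a few lines (a minimal from-scratch sketch added for illustration; the number of nodes is an arbitrary choice): nodes keep arriving, and each newcomer attaches to an existing node with a probability proportional to that node's current number of links.

```python
# Minimal sketch of growth plus preferential attachment (illustration, not Barabási's own code).
import random

random.seed(0)
degree = {0: 1, 1: 1}                # start with two linked nodes
attachment_pool = [0, 1]             # each node appears once per link it holds

for new_node in range(2, 10000):     # growth: one new node, one new link per step
    target = random.choice(attachment_pool)   # chance of being chosen is proportional to degree
    degree[new_node] = 1
    degree[target] += 1
    attachment_pool += [new_node, target]

ranked = sorted(degree.values(), reverse=True)
print("five largest hubs:", ranked[:5])                 # rich-get-richer: a few nodes acquire very many links
print("median node degree:", ranked[len(ranked) // 2])  # while the typical node keeps only a handful
```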

Luis Amaral, Gene Stanley, Antonio Scala and Mark Barthélémy demonstrated that if nodes fail to acquire links after a certain age, the size of the hubs would be limited, making large hubs less frequent than predicted by a power law. José Mendes and Sergey Dorogovtsev discovered that, by assuming that nodes slowly lose their ability to attract links as they age, they were able to show that gradual aging does not destroy the power laws, but merely alters the number of hubs by changing the degree exponent¹. (Barabási, 2002, pp.89-90)

¹ The degree exponent is the parameter that characterises the power law distribution. (Barabási, 2002, p.88)


The scale-free model has led to a new modelling philosophy by viewing networks as dynamic systems and not static systems. It has also led to the understanding that structure cannot be divorced from network evolution. (Barabási, 2002, pp.90-91)

The scale-free topology of networks is evidence of organising principles that are acting at each stage of the network evolutionary process.


Barabási et al realised that, in real life, nodes are not identical. They also realised that the ability of a node to attract links depended not only upon the number of links that the node already had, but also upon the characteristics of the node itself. The ability to attract links resulted from a ‘fitness-connectivity’ product. (Barabási, 2002, p.96) One of his students, Ginestra Bianconi, discovered that some networks could evolve in such a way that one node took all of the links. Such a network had a star topology; in other words, the winner took all. (Barabási, 2002, p.102)

The topology and the behaviour of a network are determined by the fitness distribution.



In networks in which the nodes have comparable fitness, the distribution of links appeared to follow a narrowly peaked bell-curve. Bianconi’s calculations indicated that all networks fall into two categories:

a. Those in which the competition does not have an easily noticeable impact on the network’s topology.

b. Those in which the winner takes all.

Most networks are in the first category. In these networks, the scale-free topology survives, and the rich-get-richer behaviour is displayed. However, the fittest node has only a modest lead. In the second category, where all the nodes are connected to a central node, there is an enormous difference between the fitness of the central node and the fitness of all other nodes. There is no room for a potential challenger - there is no competition. (Barabási, 2002, pp.102-103)
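A rough sketch of the fitness-connectivity idea (my own construction, not Bianconi's calculation; all parameter values are arbitrary) makes the contrast between the two categories concrete: when fitnesses are comparable, the largest hub has only a modest lead; when one node is vastly fitter than the rest, it captures most of the links and the network approaches the winner-takes-all star.

```python
# Sketch of a fitness-driven growing network: attachment probability ~ fitness x degree.
import random

def largest_hub(initial_fitness, steps=5000, seed=1):
    rng = random.Random(seed)
    fitness = list(initial_fitness)
    degree = [1] * len(fitness)
    for _ in range(steps):
        weights = [f * k for f, k in zip(fitness, degree)]   # fitness-connectivity product
        target = rng.choices(range(len(fitness)), weights=weights)[0]
        degree.append(1)                                     # the newly arrived node
        fitness.append(rng.random())                         # newcomers get ordinary fitnesses
        degree[target] += 1
    return max(degree)

print("comparable fitnesses, largest hub:", largest_hub([0.4, 0.5, 0.6, 0.7, 0.8]))
print("one super-fit node, largest hub:  ", largest_hub([100.0, 0.5, 0.6, 0.7, 0.8]))
```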




Thus it would appear that competition is essential for maintaining a scale-free topology, and for preventing the winner from taking all.


Further discoveries concern network navigability. Barabási et al discovered that starting from any page one can reach only ~24% of all the documents on the WWW. For technical reasons this appears to result from the fact that the web is directed. This means that along a given URL one can only travel in one direction. In contrast, in a non-directed web the links can be followed in either direction. In a directed web the route may become circuitous with disjointed paths; and it is this that determines the web’s navigability. It was discovered that the WWW was effectively divided into four continents as follows:


a. The Core - which contained all the major websites, and which was easily navigable;

b. The IN continent - from which it was possible to reach the core, but impossible to reach the IN continent from the core;

c. The OUT continent - in which the reverse was true; and

d. A fourth continent that consisted of tendrils and islands, which consisted of isolated groups of interlinked pages that were isolated from the core continent.

Most networks may be non-directed, but, like the WWW, food webs are directed. It is believed that all directed webs break up into these four different continents. (Barabási, 2002, pp.165-169) As well as continents, communities were found in the WWW. Web sites were often connected to other web sites that shared similar viewpoints. (Barabási, 2002, p.170)


The robustness and the vulnerability of networks appear to be directly related to their topology. A common feature of most systems that display a high tolerance level to failure is that the functionality is guaranteed by the highly interconnected nature of the complex system.

Decades of research have shown that the breakdown of a network is not a gradual process. When the number of nodes removed reaches a critical point, the network abruptly fragments into tiny unconnected islands. Barabási et al discovered that they could remove 80% of all nodes in their scale-free model without the network breaking apart. (Barabási, 2002, p.113)

Scale-free networks have an inherent resilience that is not shared by random networks. The source of this resilience is the presence of hubs. The reason for this is that failures do not discriminate against hubs; and, since there are numerous small nodes and only a few hubs, it is less likely that one of the hubs in a network will fail than it is that one of the smaller nodes will fail, assuming that each node has an equal chance of failure. (Barabási, 2002, p.114) One could add that if one did not assume that each node had an equal chance of failure, it would still seem less likely that a hub would fail as readily as a less well connected node, since a hub is fitter than a less well connected node.


With regard to vulnerability, the same is true for random attacks. Shlomo Havlin, along with his students, calculated the fraction of nodes that needed to be removed from a randomly chosen network (random or scale-free) before the network broke apart. He discovered that random networks break apart after a critical number of nodes have been removed. However, he discovered that if a scale-free network had a degree exponent ≤ 3, the network would only break apart if all of the nodes were removed. Most networks of interest are of this scale-free type and have a degree exponent of < 3, and therefore they are highly resilient to random attacks. (Barabási, 2002, pp.114-115)

However, scale-free networks are extremely vulnerable to intelligent attack: take out some of the hubs and the network suddenly collapses. It appears that the critical point re-emerges with intelligent attack. (Barabási, 2002, p.116)
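This contrast between random failures and intelligent attack can be reproduced in a small simulation (an illustration added here, assuming the networkx library; it is not Havlin's calculation, and the network size and the 10% removal fraction are arbitrary choices): removing randomly chosen nodes leaves the giant connected component largely intact, whereas removing the same number of hubs fragments it.

```python
# Illustrative simulation of robustness to failures versus vulnerability to hub attack.
import random
import networkx as nx

def giant_component_fraction(G, removed):
    H = G.copy()
    H.remove_nodes_from(removed)
    if H.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(H)) / G.number_of_nodes()

random.seed(1)
G = nx.barabasi_albert_graph(2000, 2, seed=1)                      # a small scale-free model network
random_failures = random.sample(list(G.nodes()), 200)              # 10% of nodes, chosen blindly
hub_attack = sorted(G.nodes(), key=G.degree, reverse=True)[:200]   # the 10% best-connected nodes

print("giant component after random failures:", giant_component_fraction(G, random_failures))
print("giant component after attack on hubs :", giant_component_fraction(G, hub_attack))
```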





Both the robustness and the vulnerability of a scale-free network appear to be rooted in the network’s topology.

To summarise Barabási’s research findings, the characteristics of the different types of network are compared in Table 2.


Table 2

Type of network   Symmetry (in number of links)   Competitiveness of each node
1. Regular        Symmetrical                      Equally competitive
2. Random         Slightly asymmetrical            Almost equally competitive
3. Scale-free     Asymmetrical                     Unequally competitive
4. Star           Grossly asymmetrical             No competition




Of note is that random networks can be subdivided into two groups: namely, equilibrium (Classical) random networks, in which the number of nodes remains constant; and non-equilibrium random networks, in which there is growth in the number of nodes. (Dorogovtsev & Mendes, 2003, p.8)

By looking down Table 2 from a regular to a star network, one can see that there is an increasing asymmetry in the topology of the networks. This increase in asymmetry is consistent with there being an increase in the difference between the fitness-connectivity products of each of the nodes in the network. Of note is that there is a much higher degree of clustering in scale-free networks than there is in Classical random networks. (Dorogovtsev & Mendes, 2003, p.48)


What does this tell us about the different networks themselves? Networks with regular and star topologies are unlikely to change, unless change is thrust upon them. Growing networks - both random and scale-free - are dynamic networks, and are continually changing.

It would appear that the degree of asymmetry between the fitness-connectivity products of each of the nodes is of central importance in the evolution of a network’s topology.


As is shown in Table 2, there is a difference in the degree of asymmetry between random and scale-free networks: the difference between the fitness-connectivity product of each node in the network being greater in scale-free networks than in random networks. This difference between the two types of network influences the way that each type of network evolves. In a random network there is only a small difference in the fitness-connectivity product of each of the nodes, and thus there is only a small difference in the ability of each node to attract links. This means that each node has almost the same number of links; and the distribution of links in a Classical random network can be shown graphically to have a bell-curve distribution. Since there is a peak distribution of links, the network contains a characteristic scale. The distribution of links in a growing random network is represented graphically by a rapidly decreasing curve. A scale-free network, on the other hand, is represented graphically by an asymptotically decreasing curve; and since there is no peak distribution of links, it does not have a characteristic scale. Since each network is of a finite size, there is a cut-off point for this asymptotically decreasing curve; and the term used to describe this type of curve is ‘fat-tailed’. The curve is evidence of a power law distribution of links; and it is the power law that determines the hierarchical structure of the network². (Dorogovtsev & Mendes, 2003, pp.26-28)


The properties of a network depend upon the way in which the nodes are connected. Thus the properties of a network depend upon the topology of the network, and therefore depend upon the fitness-connectivity product of each node. Scale-free networks evolve according to a power law; and this mode of evolution results from the degree of asymmetry in the fitness-connectivity products of the nodes. The evolution of a random network depends upon a lesser degree of asymmetry.





² Fig.2.1: Scheme of distribution of links over the Erdős-Rényi (Classical) equilibrium random graph. (Dorogovtsev & Mendes, 2003, p.26)
Fig.2.2: Scheme of distribution of links over the network that grows under the mechanism of random linking. (Dorogovtsev & Mendes, 2003, p.27)
Fig.2.3: Scheme of distribution of links over the network that grows under the mechanism of preferential linking. (Dorogovtsev & Mendes, 2003, p.28)



Of interest is the fact that random networks have a characteristic scale, and scale-free networks follow a power law. The evolution of each type of network therefore appears to be dependent upon a rule or rules. This is of particular interest when we consider that growth, division and rules are necessary features of living systems. In order to try to better understand the ability of systems to evolve according to rules, I have been looking at the work of Stephen Wolfram.



2. The research findings of Stephen Wolfram.

Any computer program can be thought of as consisting of a set of rules that specify what should happen at each step in the program. Stephen Wolfram has discovered some remarkable features about the behaviour of systems following very simple rules. He has been able to make his discoveries as a result of the following:

a. His approach.

b. The development of very powerful computers in the early 1980s.

With the development of the early computers, it had been hoped that it would be possible to use computer models to simulate biological systems. It was soon realised that this was practically problematic, since even the very simplest biological behaviour seemed to require very complicated computer programs. However, in the programming, set goals had been required. The research was directed. Wolfram was curious to know how systems would evolve when there were no preconceived ideas about the end result. In the early 1980s Wolfram looked at cellular automata, since they could be represented visually. The following five diagrams illustrate the behaviour of cellular automata that are following five different but very simple rules. These rules are based upon the colour (black or white) of the cell and the colour of the cell’s immediate neighbours. There are therefore eight possible combinations. Wolfram discovered that with rules involving two colours and nearest neighbours, 256 cellular automata are possible. The behaviour of each system differs markedly from the behaviour of the other systems.
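Such an elementary cellular automaton takes only a few lines of code to simulate (a minimal sketch added for illustration, not Wolfram's own program; the rule numbers follow Wolfram's standard numbering convention, and the grid width and number of steps are arbitrary choices):

```python
# Minimal sketch of an elementary cellular automaton: 2 colours, nearest neighbours, 2^8 = 256 rules.
def step(cells, rule_number):
    rule = [(rule_number >> i) & 1 for i in range(8)]        # the rule as a lookup table over the 8 neighbourhoods
    n = len(cells)
    return [rule[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

def run(rule_number, width=63, steps=30):
    cells = [0] * width
    cells[width // 2] = 1                                    # start from a single black cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule_number)

run(30)    # Rule 30: apparently random behaviour from the simplest possible start
run(90)    # Rule 90: a regular, nested pattern
```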







Diagram A (Wolfram, 2002, p.24)

Diagram A illustrates the behaviour of a cellular automaton that starts from a single black cell, and follows the simple rule:

The next cell is white, only if the cell is white and both of its neighbours are white.

As the cell is black, the next cell has to be black. There are eight possible options, if one considers the possible colour combinations of the cell and its immediate neighbours. However, with this particular cellular automaton rule, the next cell has 1 chance out of eight of being white. Unless the initial condition consists of all white cells, the final state for this particular cellular automaton will always be all black cells.




Diagram B (Wolfram, 2002, p.25)

Diagram B illustrates the behaviour of a cellular automaton that again starts from a single black cell, but this time follows the simple rule that:

The next cell is white, if the cell is white and both of its neighbours are white, or if the cell is black and both of its neighbours are white.

There are eight possible options, if one considers the possible colour combinations of the cell and its immediate neighbours. Thus one can say that the next cell has 2 chances out of eight (25%) of being white.








Diagram C (Wolfram, 2002, p.25)

Diagram C illustrates the behaviour of a cellular automaton that again starts from a single black cell, but this time follows the simple rule:

The next cell is white
- if the cell is white, and both of its neighbours are white; or
- if the cell is white and both of its neighbours are black, or
- if the cell is black and both of its neighbours are white, or
- if the cell is black and both of its neighbours are black.

There are four options for the cell to be white. Likewise, there are four options for the cell to be black. In other words, the next cell has a 50/50 chance of being black or white.








Diagram D 1 (Wolfram, 2002, p.27)

Diagram D 2 (Wolfram, 2002, p.29)

Diagram D 1 and Diagram D 2 illustrate the behaviour of a cellular automaton that starts from a single black cell and follows the simple rule:

The next cell is white
- if both of its neighbours are black, and the cell is white, or
- if both of its neighbours are white, and the cell is white, or
- if both of its neighbours are black, and the cell is black, or
- if the cell and the neighbour to the left of the cell are black.

The next cell has four options of being white and four options of being black. Again the next cell has a 50/50 chance of being black or white. Of note, however, is the asymmetry in the rule; and both the asymmetry and the complexity of the behaviour.









Diagram E 1 (Wolfram, 2002, p.32)

Diagram E 2 (Wolfram, 2002, p.33)

Diagram E 1 and Diagram E 2 illustrate the behaviour of a cellular automaton that, again starting from a single black cell, follows the simple rule:

The next cell is white
- if the cell is white, and both of its neighbours are white, or
- if the cell is black, and both of its neighbours are black, or
- if the cell and the neighbour to the right of the cell are white.

The next cell has three options of being white, and five options of being black. Of note again is the asymmetry in the rule. However, unlike the cellular automaton in Diagram D, the next cell does not have an equal chance of being black or white.


These extremely simple cellular automaton rules illustrate the four fundamentally different classes of behaviour that result from running simple computer programs.

a. Diagram A illustrates a system that almost always evolves to the same end point, and is classified as a Class 1 system.

b. Diagram B illustrates repetitive behaviour; and is classified as a Class 2 system.

Diagram C illustrates a nested pattern, which is slightly more complicated than the repetitive pattern illustrated in Diagram B, but is nevertheless repetitive. Thus it is also a Class 2 system.

c. Diagram D illustrates some apparently random behaviour; and is classified as a Class 3 system.

d. Diagram E illustrates the formation of localised structures that interact in complex ways; and is classified as a Class 4 system.


These findings show that simple rules can lead to highly complex behaviour. Indeed Wolfram discovered that:

1. The phenomenon of complexity is quite universal, and is quite independent of the details of particular systems. (Wolfram, 2002, p.105)

2. At least beyond a certain point, adding complexity to the underlying rules of a system does not lead to more complex behaviour. More complicated rules may be useful in reproducing the details of a particular system, even though this does not add fundamentally new features. (Wolfram, 2002, pp.106-107)

3. Complex behaviour almost never occurs except when large numbers of cells are active at the same time. There seems to be a significant correlation between the overall activity and the likelihood of complex behaviour. (Wolfram, 2002, p.76)


It would appear that:

a. If the rules are sufficiently simple, the system will only produce repetitive behaviour.

b. If the rules are slightly more complicated, then nesting behaviour will often occur.

c. Some threshold in the complexity of the underlying rules is required, beyond which complexity in the overall behaviour of the system is seen. This threshold can be extremely low. However, there appears to be no clear correlation between the complexity of the rules and the complexity of the behaviour they produce, since highly complex rules can lead to very simple behaviour. (Wolfram, 2002, p.106)


To understand how a system develops complexity, one needs to look at the mathematics. In traditional mathematics, it is the size of the number that has significance; whereas in computer programming, it is the digit sequence of the number that has significance. Certain sequences of numbers are, in a sense, intrinsically complex. Such sequences include the sequence of prime numbers, the digit sequence of π, and the digit sequences of square roots of imperfect squares, cube roots, fourth roots, logarithms and exponentials. To find out why a sequence of numbers leads to complex behaviour, one needs to look at each number in terms of its digit sequence. It would appear that complexity arises as a fundamental consequence of the rules by which the sequence is generated. As far as Wolfram could ascertain, almost all of these types of numbers have apparently random digit sequences; and it is only rational³ numbers that have repetitive digit sequences. (Wolfram, 2002, p.142)

³ A rational number consists of a whole number divided by a whole number. For example: ½.
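The contrast can be seen directly by generating a few digits (a small illustration added here, my own example rather than Wolfram's): a rational number such as 1/7 has a repetitive digit sequence, while the square root of 2, produced by an equally simple rule, shows no obvious repetition.

```python
# Sketch: repetitive digits of a rational number versus apparently random digits of sqrt(2).
from math import isqrt

def decimal_digits(numerator, denominator, count):
    digits, remainder = [], numerator % denominator
    for _ in range(count):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

def sqrt_digits(value, count):
    # the first `count` digits of sqrt(value), via the integer square root of value * 10^(2(count-1))
    return [int(d) for d in str(isqrt(value * 10 ** (2 * (count - 1))))]

print("1/7     :", decimal_digits(1, 7, 30))    # 1, 4, 2, 8, 5, 7 repeating
print("sqrt(2) :", sqrt_digits(2, 30))          # no visible repetition
```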


Wolfram looked at iterated maps and the chaos phenomenon. The basic idea of an iterated map is to take a number between 0 and 1, and to update that number in a sequence of steps following a simple rule to yield another number between 0 and 1. He found that such systems were sensitive to the initial conditions; such that if the initial digit sequence had been apparently random (his example was π/4), then, as the system evolved, with the digits moving effectively from the right side to the left side on the screen, the significance of each digit would increase. Such an iterated map was referred to as a ‘shift map’. (Wolfram, 2002, pp.149-150)
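A quick numerical sketch of this shift map (added here for illustration; the starting values are arbitrary) shows the sensitivity to initial conditions: the rule x → fractional part of 2x shifts the binary digit sequence of x one place to the left at each step, so two starting values that agree to roughly twelve decimal places end up completely different.

```python
# Sketch of the shift map x -> frac(2x): digits far to the right eventually dominate.
x, y = 0.785398163397, 0.785398163398      # approximately pi/4, differing only in the last digit
for step in range(41):
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, difference = {abs(x - y):.6f}")
    x, y = (2 * x) % 1, (2 * y) % 1
```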


However, Wolfram’s discovery was that:

Certain systems following simple rules generated complexity, even with very simple initial conditions. (Wolfram, 2002, pp.149-151)

However, being able to generate randomness does not imply that such systems cannot show sensitivity to their initial conditions.

Thus the presence of sensitive dependence upon initial conditions:

a. Does not imply that it is indeed that which is responsible for the randomness and complexity in systems.

b. Nor, on its own, does it make any contribution at all to the ultimate generation of randomness. (Wolfram, 2002, pp.154-155)


In addition, Wolfram found that although all standard mathematical functions, as defined in Mathematica, produced simple curves, combinations of these functions produced more complicated results. (Wolfram, 2002, p.145)



In substitution systems, the replacement of a particular element by a block of elements leads to a nested structure. (One can also look at this as an element being replaced by a block of smaller elements.) Nesting occurs because, on subsequent steps, each of these new elements is in turn replaced with the same block of elements in exactly the same way; such that each element ultimately evolves to produce an identical copy of the whole pattern. In substitution systems where black cells are arranged on a grid, different squares never overlap. However, when a geometric rule is used to replace each black cell, overlapping is possible. The building up of patterns by repeatedly applying geometric rules⁴ is central to fractal geometry. What these rules have in common is that they involve replacing one element by two or more smaller elements. Thus it is ultimately inevitable that all of the patterns that are produced will have a completely regular nested structure. (Wolfram, 2002, pp.187-190)

⁴ Programs tend to involve discrete systems; whereas geometric rules tend to involve continuous systems.
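A tiny sketch of such a substitution system (added for illustration; the particular replacement blocks are my own choice) shows the nesting directly: every black cell (#) is replaced by the block #.# and every white cell (.) by ..., so each element evolves into a copy of the whole pattern and a completely regular, Cantor-like nested structure results.

```python
# Sketch of a one-dimensional substitution system producing a nested pattern.
def substitute(row):
    replacement = {"#": "#.#", ".": "..."}
    return "".join(replacement[cell] for cell in row)

row = "#"
for _ in range(5):
    print(row)
    row = substitute(row)
```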




To get more complicated structures, some kind of interaction is necessary between the different elements.

This means that the replacement of one element at a particular step depends not only upon the characteristics of the element itself, but also upon the characteristics of other neighbouring elements. However, with a geometric replacement rule, elements may finish up on any plane; thus it is difficult to define an obvious notion of neighbours. (Wolfram, 2002, p.190)




In cellular automata, the elements are always set up in a regular array that remains the same at each step. In substitution systems with geometric rules, the elements are ultimately constrained to lie in a 2-D plane. Thus for both systems there is effectively always a fixed underlying geometric structure that remains unchanged as the system evolves. Wolfram showed that it was possible to remove this underlying structure by using one version of a ‘network system’. (Wolfram, 2002, p.193)


A node can have any number of connections; and each connection can either:

a. Link the node to a different node, or

b. Loop back to the node itself.

As the number of nodes increases, the number of possible networks grows very rapidly, as is shown in Diagram F.

Diagram F (Wolfram, 2002, p.194)

In Diagram F each node has 2 connections. With one node, one network is possible. With 2 nodes, three different networks are possible. With 3 nodes, 14 different networks are possible.


There is nothing intrinsically 1-D about the structure of network systems. By rearranging the connections, one can get a network that looks like a 2-D rather than a 1-D array. With appropriate connections it is possible to get a 3-D array; or, indeed, an array with any number of dimensions. Even when the nodes are identical, it is possible to produce structures with different numbers of dimensions by changing the patterns of connections between the nodes.

Even with the same patterns of connections, nodes can be arranged differently to produce different patterns; for example, an array, a tree or a nested structure. (Wolfram, 2002, pp.195-197) However, the properties of a system do not depend upon any specific layout of the nodes. Rather:

The properties of the system depend upon the way the nodes are connected together. (Wolfram, 2002, p.193)


How are networks transformed from one step to the next? The basic idea is that there are rules to specify how each connection coming from each node should be re-routed on the basis of the local structure around that node. Wolfram was able to show that as soon as there is any dependence upon the longer-range features of the network, more complicated behaviour was immediately possible. Even the total number of nodes at each step can vary in a way that appears to be random. (Wolfram, 2002, pp.200-201)


From completely random initial conditions, many systems tend to organize themselves. Fig.1a and Fig.1b illustrate the behaviour of different Class 1 systems; the end state of each system being either all black or all white.

Fig.1a (Wolfram, 2002, p.224)

Fig.1b (Wolfram, 2002, p.224)




Other cellular automata produced slightly more complicated behaviour. Once again, from completely random initial conditions, each system quickly organised itself into a stable state. Here the stable state involved a collection of definite structures that either remained fixed on successive states, or repeated at periodic intervals. Examples of this are shown in Fig.2.

Fig. 2 (Wolfram, 2002, p.225)







From completely random initial conditions, other cellular automata produced very complicated behaviour indefinitely. The behaviour appeared to be, in many respects, random. However, featured in the pattern were definite white triangles and other small structures, which indicated that a certain degree of organisation was taking place. (Wolfram, 2002, p.226) An example of this is shown in Fig.3.

Fig. 3 (Wolfram, 2002, p.226)


However, the greatest complexity seemed to lie between the stable state and the apparently random state. Such systems neither stabilised completely, nor exhibited close to uniform randomness. The cellular automaton quickly organised itself into a set of definite localised structures. The structures did not remain fixed. Rather, they moved around, and interacted with each other in complicated ways. The result of this behaviour was an elaborate pattern in which there was a mixture of order and randomness. (Wolfram, 2002, p.228) An example of this type of behaviour is seen in Fig.4.

Fig. 4 (Wolfram, 2002, p.229)


Wolfram discovered that, although for almost every rule the specific pattern produced is somewhat different in detail, the number of fundamentally different types of pattern is very limited. The patterns that arose could almost always be assigned quite easily to one of just four classes. (Wolfram, 2002, p.231) The classes are numbered in order of increasing complexity, and are as follows:




Class 1
Class 1 behaviour is very simple. Almost all initial conditions lead to exactly the same uniform stable state.

Class 2
With Class 2 behaviour, many different final states are possible. Nevertheless each final state consists of a certain set of simple structures that either remain the same indefinitely, or repeat periodically.

Class 3
Class 3 behaviour is more complicated, and appears in many respects random, although white triangles and other small structures are seen.

Class 4
Class 4 behaviour consists of a mixture of order and randomness. Localised structures are produced, which on their own are fairly simple. However these structures move around and interact with each other in very complicated ways. (Wolfram, 2002, pp.231-235)


Wolfram discovered these different classes of system 19 years ago. The classification was based originally upon the general appearance of the different systems. With a greater understanding of the detailed properties of these different systems, it turns out that there is a good correlation between the properties of a system and the class to which that system belongs.

By using a cellular automaton that was based upon a continuous parameter, Wolfram was able to investigate what would happen if he varied this parameter smoothly. Although he normally found stretches of Class 1 and Class 2 behaviour, and some Class 3 behaviour, he discovered that Class 4 behaviour was typically seen at the transitions. (Wolfram, 2002, pp.242-245)


Wolfram tested the sensitivity of each class to initial conditions by changing a single cell in the initial conditions. The results were as follows:

- In Class 1 systems any changes always die out. The same final state is reached regardless of the initial conditions.

- In Class 2 systems the changes may persist, but they always remain localised in a small region of the system.

- In Class 3 systems any change that is made typically spreads out at a uniform rate, eventually affecting every part of the system.

- In Class 4 systems changes can also spread, but only in a sporadic way. (Wolfram, 2002, pp.250-252)
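This test is easy to reproduce in miniature (a sketch added here, not Wolfram's own experiment; the grid size, number of steps and the particular rules are my own choices, with the four rules being commonly cited examples of Classes 1, 2, 3 and 4 respectively): run the same cellular automaton twice from random initial conditions that differ in one cell, and count how many cells differ after a fixed number of steps.

```python
# Sketch of the single-cell sensitivity test across example rules from the four classes.
import random

def step(cells, rule_number):
    rule = [(rule_number >> i) & 1 for i in range(8)]
    n = len(cells)
    return [rule[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

def cells_changed(rule_number, width=80, steps=40, seed=0):
    rng = random.Random(seed)
    a = [rng.randint(0, 1) for _ in range(width)]
    b = list(a)
    b[width // 2] ^= 1                          # flip a single cell of the initial conditions
    for _ in range(steps):
        a, b = step(a, rule_number), step(b, rule_number)
    return sum(x != y for x, y in zip(a, b))    # how far the change has spread

for rule in (254, 4, 30, 110):
    print(f"rule {rule:3d}: {cells_changed(rule):2d} cells differ after 40 steps")
```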


What is the significance of these different responses to the simplest change in the initial conditions?

These different responses reveal the basic differences in the way that each class of system handles information.




In
Class 1
systems the information about the initial conditions is rapidly forgotten.
Whatever the initial con
ditions were, the system quickly evolves to a single final
state that shows
no trace

of the initial condition.




In
Class 2
systems some information about the initial conditions is retained in the
final configuration of structures. However this information
always remains
completely localised
. It is never communicated in any way from one part of the
system to another part of the system.




In
Class 3
systems
long
-
range communication

of information is shown. Any
change that is made anywhere in the system will al
most always be communicated
eventually to the most distant parts of the system.




In
Class 4
systems long
-
range communication of information is, in principle,
possible. Nevertheless it does not always occur. The reason for this is that any
particular change

that occurs in one part of the system can
only be

communicated

to other parts of the system,
if that change occurs in one of the local structures
that move across the system.
In this way, Class 4 behaviour is intermediate
between Class 2 and Class 3 behav
iour.


There are many differences between the four classes of system. However the difference in the way in which each class handles information seems to be of fundamental importance. It is often possible to understand some of the most important features of the systems in Nature simply by looking at how these systems handle information. (Wolfram, 2002, p.252)

The behaviour of Class 2 systems is always eventually repetitive; and Class 2 systems never support any kind of long-range communication. The absence of long-range communication effectively forces each part of the system to behave as if it were a system of limited size.

Thus any system of a limited size that involves discrete elements, and that follows definite rules, must always eventually exhibit repetitive behaviour. It is this phenomenon that is ultimately responsible for much of the repetitive behaviour that one sees in Nature. (Wolfram, 2002, p.255)

Nevertheless the period of repetition is different in different cases.


A common feature of systems of limited size is that the repetition period greatly depends upon:

a. The exact size of the system.

b. The exact rule it follows.

The repetition period is maximal whenever the number of positions moved at each step shares no common factor with the total number of possible positions that could be moved. This is achieved when either of these quantities is a prime number.

The actual repetition period changes considerably as the size of the system changes. The maximum possible repetition period for any system is always equal to the total number of possible states of the system.



With n cells, there are 2^n possible states. This number increases very rapidly with the size of n. For example:

If n = 5, then there are 32 possible states.
If n = 10, then there are 1024 possible states.
If n = 20, there are 1,048,576 possible states.
If n = 30, there are 1,073,741,824 possible states.

(Wolfram, 2002, pp.257-258)
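The bound just described can be checked directly. The sketch below (an illustrative Python toy, not code from Wolfram's book) evolves an elementary rule on a ring of n cells and measures the length of the cycle it eventually falls into; by construction this can never exceed the 2^n possible states:

def make_rule(rule_number):
    return {(l, c, r): (rule_number >> ((l << 2) | (c << 1) | r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def step(cells, table):
    n = len(cells)
    return tuple(table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                 for i in range(n))

def repetition_period(rule_number, n):
    """Length of the cycle eventually entered from a single-black-cell start."""
    table = make_rule(rule_number)
    cells = tuple(1 if i == n // 2 else 0 for i in range(n))
    seen = {}
    t = 0
    while cells not in seen:
        seen[cells] = t
        cells = step(cells, table)
        t += 1
    return t - seen[cells]                     # never more than 2**n

if __name__ == "__main__":
    for n in (5, 10, 15, 20):
        print(f"Rule 90, n = {n:2d}: period {repetition_period(90, n)} (2**n = {2**n})")

Running it for several values of n shows the kind of fluctuation with system size described in the next paragraph.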


In Fig.10 there are examples of cellular automata of limited sizes. The examples show the size and the repetition periods of these cellular automata. In general, a rapid increase in repetition period with system size is characteristic of Class 3 behaviour. Of the elementary rules, only Rule 45 seems to yield periods that always stay close to the maximum of 2^n. In all cases, there are considerable fluctuations in the periods that occur, as the size of the system changes.

Fig. 10 (Wolfram, 2002, p.259)


When a system, in principle, contains an infinite number of cells, it is still possible that a particular pattern in that system will only grow to occupy a limited number of cells. In such cases the pattern must repeat itself with a period of at most 2^n steps, where n is the size (the number of cells) of the pattern. This is indeed what happens in Class 2 systems with random initial conditions. Since different parts of the system do not communicate with each other, they all behave as separate patterns of limited size. In most Class 2 cellular automata these patterns are effectively only a few cells across, so their repetition periods are necessarily quite short. (Wolfram, 2002, p.260)


Although random behaviour can result from random initial conditions, as discussed earlier in this text, Wolfram's discovery was that some rules generate randomness regardless of the initial conditions. One example of this is Rule 30.

By intrinsically generating randomness, such systems have a certain fundamental stability. Whatever the initial conditions, the same overall randomness is produced, with the same large-scale properties. In Nature there are many systems the apparent stability of which is ultimately a consequence of just this kind of phenomenon. (Wolfram, 2002, p.266)


Rule 90 has the property of additivity, which implies that with any initial conditions, the patterns that it produces can be obtained by simple superpositions of a nested pattern. But, unlike Rule 30 and Rule 22, it cannot generate randomness. Wolfram was able to produce a random pattern using Rule 90, but this was a consequence of the random initial conditions, and not due to Rule 90. (Wolfram, 2002, p.264)
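The additivity attributed to Rule 90 can be verified numerically. In the illustrative sketch below (my own toy code, with arbitrary sizes and seed), each cell is updated to the XOR of its two neighbours, and the evolution of the cell-by-cell XOR of two initial conditions is compared with the XOR of their separate evolutions:

import random

def rule90_step(cells):
    """Rule 90: each cell becomes the XOR of its two neighbours (periodic ring)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def evolve(cells, steps):
    for _ in range(steps):
        cells = rule90_step(cells)
    return cells

if __name__ == "__main__":
    random.seed(1)
    n, steps = 64, 25
    a = [random.randint(0, 1) for _ in range(n)]
    b = [random.randint(0, 1) for _ in range(n)]
    xor_then_evolve = evolve([x ^ y for x, y in zip(a, b)], steps)
    evolve_then_xor = [x ^ y for x, y in zip(evolve(a, steps), evolve(b, steps))]
    print("additivity holds:", xor_then_evolve == evolve_then_xor)   # expected: True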


However, there are special conditions that predispose to unusual behaviour from a particular rule, even from Rule 30. Although Rule 30 generates randomness from simple and from random initial conditions, it is found that if the initial state is uniformly white, Rule 30 yields white forever. It is also possible to find less trivial initial conditions from which the Rule 30 cellular automaton will yield simple repetitive patterns. In any cellular automaton it is inevitable that initial conditions that consist just of a fixed block of cells repeated forever will lead to simple repetitive behaviour. In effect, each block of cells acts like a system of a limited size; so that for a given block that is n cells wide, the repetition period that is obtained must be at most 2^n steps. If one wants to find a short repetition period, it is a question of whether or not there is a block of cells that can produce it.

Another example is Rule 126, with which Class 3 behaviour is usually produced. If the initial conditions contain blocks consisting of pairs of black and white cells, Rule 126 produces nested behaviour similar to that produced by Rule 90. The reason for this is that on alternate steps the arrangement of blocks in the Rule 126 cellular automaton corresponds exactly to the arrangement of individual cells in the Rule 90 cellular automaton. (Wolfram, 2002, pp.266-270)


Why does Rule 90 yield nested patterns? The reason is that Rule 90 can emulate itself. The basic idea is to consider the initial conditions not as a sequence of individual cells, but as a sequence of blocks, each of which contains 2 adjacent cells. With an appropriate form for these blocks, the configuration of blocks evolves exactly according to Rule 90.

The fact that both individual cells and whole blocks of cells evolve according to the same rule means that whichever pattern is produced, it must have the same structure, whether it is looked upon in terms of individual cells or in terms of blocks of cells. (Wolfram, 2002, pp.270-271)


There are only two ways in which this can be achieved; namely:

a. Either the pattern must be essentially uniform, or

b. It must have a nested structure.

The property of self-emulation is rather rare among cellular automaton rules. Another example is Rule 150. Using 2 colours and the nearest neighbours, Wolfram found that Rule 90 and Rule 150 were the only fundamentally different additive rules. (Wolfram, 2002, p.271)


Additive rules, however, are not the only rules that can emulate themselves. Blocks of 3 cells can act as individual cells in Rule 184. With simple initial conditions the rule produces trivial behaviour; but with the appropriate choice of nested initial conditions, the rule yields a highly regular nested pattern. With most rules, including Rule 90 and Rule 150, such nested initial conditions typically yield the same results that would have resulted from random initial conditions. The nested structure seen with Rule 184 can thus be seen as a consequence of the fact that Rule 184 can emulate itself. (Wolfram, 2002, pp.272-273)


In random initial conditions any sequence of black and white cells can be present. However it is a feature of most cellular automata that, on subsequent steps, the sequences that can be produced become progressively more restricted. (Wolfram, 2002, p.275)


With Class 1 systems, each system has a uniform end state from almost all initial conditions. The resulting configuration for each system can be thought of as an attractor for that particular cellular automaton evolution. The situation is similar to that which occurs in mechanical systems, such as a pendulum. One can start the pendulum swinging in any configuration, but the system will evolve in such a way that the pendulum will end up in the same configuration; namely, hanging vertically downwards.

The final configuration of a Class 2 cellular automaton is also an attractor. However the attractor consists not of a single configuration, but rather of all the final configurations for that particular automaton rule. For any particular configuration, there are generally many different initial conditions that can lead to it. In a mechanical analogy, each possible final configuration is like the lowest point in a basin: one can start a ball rolling from any point in the basin, and the ball will end up at the lowest point. (Wolfram, 2002, pp.275-276)


For 1-D cellular automata, there is a rather compact way of summarising all of the possible sequences of black and white cells that can occur at any given step in the evolution of these cellular automata; namely, to construct a network in which each sequence of black and white cells corresponds to a possible path. (Wolfram, 2002, p.276) Examples of this are shown in Fig.12.



Fig. 12 (Wolfram, 2002, p.277)


The first network in each case represents random initial conditions. Starting from the middle node, one can go around the right loop or the left loop any number of times. At step 2 in Rule 255 the network has only one loop, since the only sequences that can occur with this rule are sequences that consist only of black cells. In contrast, in Rule 4, all the sequences that can occur at step 2 are represented by the loops in a network of two nodes. Starting from the right node, one can go around the loop any number of times: each loop representing another white cell. At any point one can cross over to the left node, and go around this loop any number of times: each loop representing another black cell. (Wolfram, 2002, pp.276-277)


As different cellular automata evolve, the set of sequences that can occur becomes progressively smaller. Thus on successive steps, the networks become more complicated. (Wolfram, 2002, p.277)

Some examples of this are shown in Fig.13. (Wolfram, 2002, p.278)



Fig. 13 (Wolfram, 2002, p.278)

However, for both Class 1 and Class 2 systems the networks always continue to have a fairly simple form. (Wolfram, 2002, pp.276-277)


At step 2 in the Rule 126 cellular automaton in Fig.14, black cells can no longer appear on their own; they must appear in blocks of 2 or more. At step 3, it is difficult to see any change in the picture of the cellular automaton evolution. However, if one looks at how the network is evolving, one finds that there is an infinite collection of other blocks that are forbidden. At later steps, one finds that the set of sequences that are allowed rapidly becomes more complicated; and this is reflected in the rapid increase in complexity of the corresponding networks.

This kind of rapid increase in network complexity is a general characteristic of most Class 3 and Class 4 rules. (Wolfram, 2002, pp.278-279)

Of note is that the number of nodes seems to increase at least exponentially.




Fig. 14 (Wolfram, 2002, p.279)
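The progressive restriction of allowed sequences described above can be enumerated directly for small cases. The sketch below (an illustrative Python toy; the block length and number of steps are arbitrary choices) lists how many distinct length-4 blocks can still occur after a few steps of Rule 126:

from itertools import product

def make_rule(rule_number):
    return {(l, c, r): (rule_number >> ((l << 2) | (c << 1) | r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def shrink_step(cells, table):
    """One step without wrap-around; the output is two cells shorter."""
    return tuple(table[(cells[i - 1], cells[i], cells[i + 1])]
                 for i in range(1, len(cells) - 1))

def allowed_blocks(rule_number, block_len, steps):
    """All length-`block_len` sequences that can occur after `steps` steps."""
    table = make_rule(rule_number)
    found = set()
    for init in product((0, 1), repeat=block_len + 2 * steps):
        cells = init
        for _ in range(steps):
            cells = shrink_step(cells, table)
        found.add(cells)
    return found

if __name__ == "__main__":
    for t in range(4):
        blocks = allowed_blocks(126, 4, t)
        print(f"Rule 126, after {t} step(s): {len(blocks)} of 16 possible length-4 blocks occur")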


In Fig.15 the pictures of four different cellular automaton rules are shown. Each one has the property that, from random initial conditions, the same sequences can occur at each step throughout its evolution. If one starts with almost any other initial conditions, rapidly increasing complexity in the sequences that are allowed is once again observed. (Wolfram, 2002, pp.279-280)

Fig. 15 (Wolfram, 2002, p.280)





Two crucial features of a Class 4 system are that:

a. There must always be certain features that can persist forever in it.

b. Some of these structures move around.

In Wolfram's experience with many different rules, he has found that whenever sufficiently complicated persistent structures occur, structures that can move around can eventually be found.


When the initial condition 54,889 was reached with the Code 1329 cellular automaton, a rather different kind of structure appeared, as is shown in Fig.18.

Fig. 18 (Wolfram, 2002, p.288)

The right-hand part of the structure repeats with a period of 256 steps; but as this part of the structure moves to the right, it leaves behind a sequence of other persistent structures. As a result of this, the whole structure grows forever, adding progressively more and more cells.

However, when it reaches the initial condition 97,439, again there is unlimited growth; but this time, the pattern that is produced is very simple. (Wolfram, 2002, pp.287-288)


Examples of unbounded growth are illustrated in Fig.19, each pattern occurring at different initial conditions.

Fig. 19 (Wolfram, 2002, p.289)

It is a general feature of Class 4 cellular automata that, with the appropriate initial conditions, they can mimic the behaviour of all sorts of other systems. (Wolfram, 2002, p.291)



A further feature of Class 4 systems is that individual structures can interact. In some cases, one structure passes through another structure with only a slight delay. Often a collision between two structures produces a whole cascade of new structures. Sometimes the outcome of a collision is evident in a few steps; often it takes a large number of steps for the outcome to become obvious. Thus even if the individual structures in Class 4 systems behave in fairly repetitive ways, the interactions between these structures can lead to behaviour of immense complexity. (Wolfram, 2002, p.291)



DISCUSSION.


By comparing Wolfram's work with that of Barabási et al., it would be reasonable to conclude that the properties of a Class 4 system resemble those of a scale-free network; and that it is highly likely that a scale-free network is a Class 4 system. This being the case, it would then be possible to claim a greater understanding of why different networks have the topologies that they do have, and of what the properties of these different networks are likely to be.


Barabási has shown that the degree of asymmetry between the fitness-connectivity products of the nodes is of central importance in the evolution of networks. Wolfram has provided evidence that an asymmetry in the rule by which a system evolves will result in complex behaviour as that system evolves. In the examples of scale-free networks that Barabási has shown, there are examples of biological systems on markedly different scales. For example:

a. A separation of ~3 chemical reactions between the molecules in the cell.

b. A separation of 14 synapses between the neurons in the brain of the worm, Caenorhabditis elegans. (Barabási, 2002, p.34)


If a scale-free network is a Class 4 system, then one would expect to see those properties that are associated with a Class 4 system in such a network; properties such as:

a. Continual growth.

b. Persistent structures.

c. Movement of some of these structures; and the interaction of these moving structures with other structures.

d. Periodic exchange of information between different parts of the system, resulting in the development of new structures.

e. Changes in patterns of behaviour with changes in the size of the system.

Scale-free networks appear to have these properties. In addition, the scale-free topology of a network is evidence of organising principles that are acting at each stage in the evolutionary process of that network.
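One simple model of such stage-by-stage organisation is growth with preferential attachment, in the style of Barabási and Albert. The sketch below is an illustrative toy (the node count, the number of links added per new node, and the seed are arbitrary assumptions), not a reconstruction of any network discussed in the text:

import random
from collections import Counter

def preferential_attachment(n_nodes=2000, m_links=2, seed=0):
    """Grow a network one node at a time, linking to existing nodes
    with probability proportional to their current degree."""
    random.seed(seed)
    degree = Counter({0: 1, 1: 1})     # start from a single edge between nodes 0 and 1
    endpoints = [0, 1]                 # every edge endpoint; nodes appear here
                                       # in proportion to their degree
    for new in range(2, n_nodes):
        targets = set()
        while len(targets) < m_links:
            targets.add(random.choice(endpoints))
        for t in targets:
            endpoints.extend([new, t])
            degree[new] += 1
            degree[t] += 1
    return degree

if __name__ == "__main__":
    degree = preferential_attachment()
    counts = Counter(degree.values())
    # A fat tail (a roughly straight line on log-log axes) is the scale-free signature.
    for k in sorted(counts)[:10]:
        print(f"degree {k:3d}: {counts[k]} nodes")

Because each new node links to existing nodes in proportion to their current degree, early and well-connected nodes keep accumulating links, and the resulting degree distribution develops the fat tail characteristic of a scale-free network.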


In late 1999, Hawoong Jeong was able to extract data from the web site of the Argonne National Laboratory outside Chicago. The data consisted of the metabolic networks of forty-three diverse organisms. Jeong carefully assembled the full metabolic map for each of these forty-three organisms; and then set about characterising these networks by calculating the number of reactions in which each molecule participated. All of the organisms showed a scale-free topology. Each cell looked like a tiny web with a few molecules involved in the majority of the reactions (the hubs of metabolism), while most of the molecules participated in only one or two of the reactions. Jeong found that there were three degrees of separation in the metabolic networks of the cells in each of the organisms. Because these forty-three organisms were of different sizes, it had been expected that the degrees of separation would have been different. The measurements indicated that the degree of separation in the cells might be the same for species of all sizes. In addition, most cells share the same hubs; namely, water, adenosine triphosphate (ATP) and adenosine diphosphate (ADP). (Barabási, 2002, pp.185-186)
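A "degrees of separation" figure of the kind Jeong reported can, in principle, be computed for any network by averaging shortest-path lengths. The following sketch (my own toy example with a made-up hub-and-spoke edge list, not Jeong's data or code) shows the idea:

from collections import deque, defaultdict

def average_separation(edges):
    """Mean shortest-path length over all connected node pairs (breadth-first search)."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    total = pairs = 0
    for source in graph:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for neighbour in graph[node]:
                if neighbour not in dist:
                    dist[neighbour] = dist[node] + 1
                    queue.append(neighbour)
        total += sum(d for n, d in dist.items() if n != source)
        pairs += len(dist) - 1
    return total / pairs if pairs else float("inf")

if __name__ == "__main__":
    # A made-up hub-and-spoke toy map in which 'atp' joins most reactions.
    edges = [("atp", x) for x in ("glucose", "g6p", "adp", "nadh", "pyruvate")]
    edges += [("glucose", "g6p"), ("pyruvate", "nadh")]
    print(f"average separation: {average_separation(edges):.2f}")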


However, there were significant differences. By comparing the metabolic pathways in all forty-three organisms, they found that only 4% of the molecules appeared in all of them. (Barabási, 2002, p.187) According to Barabási,

Though the hubs are identical, when it comes to the less connected molecules, all organisms have their own distinct varieties. Life looks like a suburb in which each house was designed by the same architect, but different builders and interior designers were commissioned to offer the finishing touches from the material of the floor to the size and make of the windows. In an aerial photograph all houses appear to look alike. The closer you get to them, however, the more you start noticing the differences. (Barabási, 2002, p.187)


Although the metabolic networks in the cells of these different organisms were similar, in that each network had a separation of ~3, and, in most cases, the networks shared the same three hubs, the vast majority of the other molecules in the metabolic networks of the cells in these different organisms were different. They were not even remotely identical networks, since they did not consist of identical nodes. The properties of a network depend upon the way the nodes are connected together; and the way the nodes are connected together depends upon the fitness-connectivity product of each node. It would seem to be an enormous assumption to make that each non-hub molecule in the metabolic network of each cell in each of these different organisms had exactly the same fitness-connectivity product. In addition, the existence of a scale-free network implies that the most highly connected nodes have only a modest competitive lead; and, as Wolfram has pointed out, complex behaviour requires some kind of interaction between the different elements, with many nodes active at the same time. It seems much more likely that the networks are different because they consist of different molecules.

This highlights the tendency to confuse the universal properties of systems with the specific properties of a specific system. It is often in the details of the systems that the differences exist.

However, one important universal property of both random and scale-free networks is their ability to change. Indeed much of their behaviour is indicative of change. Without change there is no dynamic; there is no progress. To maintain that ability to change, the network requires a degree of asymmetry between the fitness-connectivity products of the nodes. Even in scale-free networks, however, there is only a modest difference between the fitness-connectivity products of the hubs and those of the other nodes. Where there is an enormous difference between the fitness-connectivity product of one node and that of each of the other nodes, the network has a star topology. In such networks there is virtually no competition, and therefore there is very little change.



Our awareness of the need for an environmental ethic has evolved out of our awareness of the impact that our behaviour is having on other species. The rate of species loss has been estimated to be on a par with that which occurred during the last mass extinction of almost 65 million years ago. (Wilson, 1998, p.328) The question is how can we alter our behaviour so as to lessen our impact on other species? Both Leopold and Worster have pointed out that the rates of change in biological evolution are not always compatible with our technical and economic rates of change. One example of this is the invention of the chain saw. It is unlikely that the rates of change in biological evolution will increase to a level that is compatible either with many of the consequences of our present technical prowess, or with our ability to consume. Therefore it is necessary that our species adapt so that the structure of the network, in which our species has evolved, is conserved. The right action in Aristotelian virtue ethics is the action that is at the appropriate mean between opposing extremes of behaviour. Since imposing change and not allowing change are opposing extremes of behaviour, the right action must be to allow change. Therefore, a suitable environmental ethic might be to allow change rather than to impose it.


Such an ethic might be suitable as a universal principle, but it may not provide sufficient criteria on which to make decisions about the best action to take in particular situations that involve the interaction of human and non-human systems. As has been shown, systems differ, because the details of each of those systems differ. Thus one must consider not only universal principles, but also specific details.


In making decisions in Western traditional ethics, one must consider not only the interests of the majority, but also the interests of the individual. Ethical dilemmas often exist where these particular interests conflict. The utilitarian ethical theory serves to protect the interests of the majority; whereas Human Rights, founded upon the deontological ethical theory, serve to protect the basic interests of the individual from tyranny by the majority. However, when the interests of non-human systems conflict with the interests of human systems, the protection of the interests of these non-human systems, both living and non-living, from tyranny by humans who are serving their own interests, is almost absent in Western traditional ethical theories. In addition, when the interests of the majority are being considered, this almost always refers only to the interests of the majority of humans. Since ethical theories aim to offer guidelines on decision-making, by determining which factors should be taken into account in deciding which is the right action to take in particular situations, it becomes obvious that an asymmetry, or bias, in the selection of the factors that are to be taken into account in the decision-making process will affect the decision that is reached. This asymmetry in the decision-making process by which humans decide how to act must affect how humans do act. A more profound question is 'Do we act the way we do act because of this degree of asymmetry, or partiality, in the decision-making process?'


In ethical considerations it is always pertinent for one to question the presence of any partiality within the decision-making process. The reason for this is that in order to reach a fair or just decision, the person making that decision should reach that decision impartially, by giving equal consideration to the interests of each party where there is a conflict of interests between the different parties involved. Amartya Sen has highlighted a basic distinction between two quite different ways of invoking impartiality. According to Sen,


The procedures involve disparate interpretations of the demands of impartiality, and can correspondingly have rather dissimilar substantive implications. The two approaches will be called closed and open impartiality respectively. ….

With closed impartiality, the procedure of making impartial judgments invokes only members of the focal group itself. For example, the Rawlsian method of "justice as fairness" uses the device of an "original contract" between the citizens of a given polity. …. In contrast, in the case of open impartiality, the procedure of making impartial judgments can (and in some cases, must) invoke judgments inter alia from outside the focal group. For example, in Adam Smith's use of the device of "the impartial spectator", the demands of impartiality require the invoking of disinterested judgments of "any fair and impartial spectator", not necessarily (indeed sometimes ideally not) belonging to the focal group. Both approaches demand impartiality, but through different procedures, which can substantially influence the reach as well as the results of the respective methods. (Sen, 2002, pp.445-446) ….

[He continues] Indeed, it may emerge that what is taken to be perfectly natural and normal in a society cannot survive a broad-based and less limited scrutiny. …. Smith's insistence that we must inter alia view our sentiments from "a certain distance from us" is, thus, motivated by the object of scrutinizing not only the influence of vested interest, but also the impact of entrenched tradition and custom. …. An impartial assessment requires not only the avoidance of the impact of individual vested interests, but also an exact scrutiny of parochial moral and social sentiments, which may influence the ideas and outcomes in locally separated "original positions" [5]. (Sen, 2002, pp.458-459)

[He concludes] The procedure of closed impartiality, particularly exemplified by contractarian devices applied to closed groups, can involve a strictly partial approach to impartiality. It suffers, as a result, from a number of distinct problems, of which "exclusionary neglect" is one. …. [It] can suffer from other serious problems as well, including "procedural parochialism" and "inclusionary incoherence". Since these limitations have not yet received much examination at all, they demand greater and clearer recognition … . In tackling each of these problems, the alternative of open impartiality has some merit … . (Sen, 2002, p.469)

[5] The "original position" refers to the Rawlsian hypothesis in which members of a focal group were required to decide how to distribute resources within their group from behind a 'veil of ignorance'. In other words, the group members did not know in advance what their positions in the group would be.




It would appear that our ethical decision-making process is one of closed impartiality. Thus, if we are aiming to act more cooperatively and more socially within a much wider framework, we need to include the interests of non-human systems as well as the interests of human systems in our decision-making process; and to decide more impartially how we are to act.


As has been shown, it is necessary to consider not only universal principles but also specific details in our decision-making process. Thus it would be useful to take an example and to try to scrutinize it from the perspective of "the impartial spectator". A pertinent example would be the morally controversial issue of the use of animals as models for the human condition in scientific experiments.


The use of animals in scientific experiments is central to the current biomedical paradigm. The reason that animals are used as models for the human condition in scientific experiments is that it is considered to be immoral to experiment on a human being without the consent of that human being. Nevertheless it has been considered to be morally acceptable to experiment on a non-human sentient being without the consent of that non-human sentient being. The moral argument used to support this action is that the benefit to humans greatly outweighs the cost to the animals. This is a utilitarian argument. Let us try to examine this from the perspective of "the impartial spectator". One could begin by asking the following questions:


a. Are the interests of both the non-human and the human animals being given equal consideration?

b. What are the exact benefits to humans?

c. What are the exact costs to the non-human animals?

d. Are the non-human animals, who are being used in these experiments, relevant models for the human condition?


It would be reasonable to give a negative answer to question a., even if it was only on the basis that it has been considered necessary to acquire the consent of a human being, and yet considered not necessary to acquire the consent of a non-human sentient being.

Hugh LaFollette and Niall Shanks have researched the issue of animal experimentation, and have arrived at the conclusion that although there have been benefits for humans, the claims about the benefit to humans have been exaggerated. They were unable to offer an accurate estimate of the benefit to humans, for the simple reason that there has been no accurate accounting (no historical record) of these benefits. Nevertheless they did come to the conclusion that the claims of the benefit to humans had been exaggerated for the following reasons:


a. Some medical historians had provided evidence to show that there had been a substantial improvement in Public Health, with reduced mortality, before modern interventionist medicine was introduced. This improvement in Public Health had been the result of an improvement in nutrition and an improvement in sanitation.

b. Some of the claims made by the defenders of the practice of animal experimentation, with the purpose of showing how successful the practice had been, were found to contain flawed arguments.

c. There was some doubt about the relevance of non-human animals as models for the human condition.



With regard to the cost to the animals, one must consider not only the number of animals, but also the actual cost to each individual animal in terms of pain, mental and physical stress, lack of freedom and lack of control over lifestyle choice, and, ultimately for most, the loss of life itself. The estimates of the cost to the animals, in terms of numbers, have been presented on a statistical basis.

The total number of scientific procedures carried out on living animals in the UK in 2001 was 2,622,442. This number does not include the total number of animals bred for experimentation; that is, those animals that were not experimented upon and that were euthanased. The statistics of the scientific procedures carried out on living animals in the UK in 2001 are shown in Table 4.




Table 4
The 2001 Statistics of Scientific Procedures on Living Animals
Information from the Home Office Statistics of Scientific Procedures on Living Animals, Great Britain, 2001.

Categories | Numbers | Change Since 2000

Statistics on Species Used
Total number of procedures on animals | 2,622,442 | DOWN 3%
* Total number of individual animals used | 2,567,713 | DOWN 3%
Procedures on cats | 1,580 | DOWN 13%
  in respiratory & cardiovascular research | 18 | UP 1700% (Previously 1)
  in nervous system or special senses research | 188 | DOWN 36%
  in toxicity tests | 12 | DOWN 94%
* Number of individual cats | 731 | UP 19%
Procedures on primates (incl. macaques: 2,597; marmosets & tamarins: 1,339; other monkey: 50) | 3,986 | UP 8%
  in toxicity tests | 2,466 | DOWN 4%
  in nervous or special senses research | 789 | UP 53%
  in respiratory or cardiovascular research | 420 | DOWN 87%
* Number of individual primates | 3,342 | UP 13%
Procedures on dogs | 7,945 | UP 4%
  in respiratory & cardiovascular research | 1,317 | UP 1%
  in toxicity tests | 5,332 | DOWN 35%
* Number of individual dogs | 5,554 | UP 17%
Procedures on mice (63% of total) | 1,657,657 | UP 3%
Procedures on amphibians | 15,850 | UP 2%
Procedures on rabbits | 33,741 | DOWN 15%
Procedures on birds | 126,858 | UP 2%
Procedures on horses, donkeys or crossbreds | 8,805 | DOWN 5%
Procedures on fish | 171,092 | DOWN 30%
Procedures using transgenic animals (24% of total) | 630,759 | UP 8%
Procedures using animals with a harmful genetic defect | 246,844 | DOWN 4%

Statistics on Research Field
Testing for toxicity, safety or efficacy (17% of all procedures) | 455,466 | UP 0.1%
Fundamental research (30% of all procedures) | 788,651 | DOWN 11%
Number of procedures without anaesthetic (59% of total) | 1,551,071 | DOWN 5%
Research, development and safety testing of pharmaceuticals | 685,896 | DOWN 7%
Toxicity testing of cosmetics (no longer permissible in the UK) | 0 | SAME
Procedures in education & training | 5,773 | DOWN 4%
Tobacco research | 48 | DOWN 21%
Alcohol research | 3,081 | SAME
Toxicity tests of food additives and other foodstuffs | 3,470 | DOWN 42%
Toxicity tests of household products | 590 | DOWN 52%
Toxicity tests of industrial substances | 52,685 | DOWN 2%
Toxicity tests of agricultural chemicals (incl. 128 beagles) | 40,998 | UP 16%
Toxicity tests for environmental pollution | 38,245 | UP 9%
Lethal short-term toxicity tests | 122,286 | UP 27%
Toxicity procedures required by legislation, British & overseas | 389,779 | UP 2%
Cancer research (includes 106,594 genetically modified animals; 68,027 animals with genetic defect) | 268,757 | UP 4%
Tests for cancer-causing chemicals | 9,157 | DOWN 19%
** Interference with organs of sight, hearing, smell or taste | 16,478 | UP 5%
** Injection into brain | 25,043 | DOWN 7%
** Interference with the brain | 31,041 | DOWN 20%
** Procedures deliberately causing psychological stress | 10,282 | DOWN 1%
** Procedures involving aversive training | 10,920 | UP 72%
** Exposure to radiation | 7,225 | DOWN 40%
** Thermal injury | 363 | UP 274%
** Physical injury to mimic human injury | 6,496 | DOWN 17%
Eye irritancy tests | 1,457 | DOWN 26%
Tests for fever-causing potential (pyrogenicity) | 11,749 | DOWN 12%
Psychology research | 37,873 | DOWN 65%
Production of monoclonal antibodies | 6,018 | DOWN 2%

Other Statistics
Number of project licence holders (in charge of research projects) | 4,123 | UP 3%
Number of Home Office Inspectors | 21 | SAME

* The difference between numbers of procedures and numbers of animals is due to some animals being used in more than one procedure.
** The Home Office has listed these procedures as being of particular interest. (Hadwen, 2001)


The statistics for the number of scientific procedures carried out on living animals in the UK in 2002 are shown in Table 5.



Table 5
The 2002 Statistics of Scientific Procedures on Living Animals
Information from the Home Office Statistics of Scientific Procedures on Living Animals, Great Britain, 2002.

Categories | Numbers | Change Since 2001

Statistics on Species Used
Total number of procedures on animals | 2,732,712 | UP 4%
* Total number of individual animals used | 2,655,876 | UP 3%
Procedures on cats | 1,395 | DOWN 12%
* Number of individual cats | 616 | DOWN 16%
Procedures on primates | 3,977 | DOWN 0.2%
  in toxicity tests | 2,790 |
  in nervous or special senses research | 620 |
  in respiratory or cardiovascular research | 298 |
* Number of individual primates | 3,173 | DOWN 5%
Procedures on dogs | 7,964 | UP 0.2%
  in respiratory & cardiovascular research | 1,195 |
  in toxicity tests | 4,999 |
* Number of individual dogs (7,664 beagles and 300 crossbreeds) | 5,746 | UP 3%
Procedures on mice (63% of total) | 1,720,253 | UP 4%
Procedures on amphibians | 15,355 | DOWN 3%
Procedures on rats | 509,647 | UP 2%
Procedures on guinea pigs | 45,568 | DOWN 4%
Procedures on hamsters | 6,240 | DOWN 15%
Procedures on rabbits | 30,280 | DOWN 10%
Procedures on sheep | 33,610 | UP 79%
Procedures on pigs | 8,453 | UP 44%
Procedures on birds | 138,347 | UP 9%
Procedures on horses, donkeys or crossbreds | 8,002 | DOWN 9%
Procedures on fish | 181,953 | UP 6%
Procedures using transgenic animals (26% of total) | 709,979 | UP 12%
Procedures using animals with a harmful genetic defect | 259,898 | UP 5%

Statistics on Research Field
Testing for toxicity, safety or efficacy (18% of all procedures) | 485,767 | UP 7%
Fundamental research (31% of all procedures) | 864,277 | UP 11%
Number of procedures without anaesthetic (60% of total) | 1,634,771 | UP 5%
Research, development and safety testing of pharmaceuticals | 660,390 | DOWN 4%
Toxicity testing of cosmetics (no longer permissible in the UK) | 0 | SAME
Procedures in education & training | 5,364 | DOWN 7%
Tobacco research | 0 | DOWN 100%
Alcohol research | 2,330 | DOWN 24%
Toxicity tests of food additives and other foodstuffs | 5,414 | UP 56%
Toxicity tests of household products | 1,032 | UP 75%
Toxicity tests of industrial substances | 42,280 | DOWN 20%
Toxicity tests of agricultural chemicals (incl. 440 beagles) | 57,804 | UP 41%
Toxicity tests for environmental pollution | 38,214 | SAME
Lethal short-term toxicity tests | 135,626 | UP 11%
Toxicity procedures required by legislation, British & overseas | 421,603 | UP 8%
Cancer research | 258,145 | DOWN 4%
Tests for cancer-causing chemicals | 19,109 | DOWN 109%
** Interference with organs of sight, hearing, smell or taste | 16,628 | UP 1%
** Injection into brain | 26,415 | UP 5%
** Interference with the brain | 29,039 | DOWN 6%
** Procedures deliberately causing psychological stress | 9,804 | DOWN 5%
** Procedures involving aversive training | 7,648 | DOWN 30%
** Exposure to radiation | 9,199 | UP 27%
** Thermal injury | 28 | DOWN 92%
** Physical injury to mimic human injury | 11,769 | UP 81%
** Inhalation | 47,485 | UP 8%
Eye irritancy tests | 1,271 | DOWN 13%
Tests for fever-causing potential (pyrogenicity) | 10,872 | DOWN 7%
Psychology research | 39,642 | UP 5%
Production of monoclonal antibodies (none produced by ascites in living animals) | 4,320 | DOWN 28%

Other Statistics
Number of project licence holders (in charge of research projects) | 4,149 | UP 0.6%
Number of Home Office Inspectors | 25 | UP 19%

* The difference between numbers of procedures and numbers of animals is due to some animals being used in more than one procedure.
** The Home Office has listed these procedures as being of particular interest.

(Hadwen, 2002)



This degree of accuracy in accounting is perhaps exceptional. Barbara Orlans has presented a list of the number of laboratory animals used in research by different countries. The list is shown in Table 1.


Table 1 (Orlans, 1998, p.401)


Orlans notes that:

The annual total of over 41 million animals used is an underestimate. This is because many countries that use animals for experimentation do not count the numbers used. No data are available from several countries in Europe, the Middle East, Africa, South America, Asia and other regions. The numbers of animals are given to the nearest thousand. Figures in parentheses indicate year of count. Because of different criteria for counting (in the United Kingdom, for instance, procedures are counted rather than numbers of animals), the figures may not be directly comparable from country to country. Numbers represent official national statistics, except for the United States and Japan; see comments below.

* The United States Department of Agriculture counts only about 10 per cent of all animals used in experimentation. The most used species (rats, mice and birds) are not protected under the relevant legislation and are therefore not counted. In 1996, the number of animals officially counted was 1,345,700. For this table, this figure has been multiplied by ten to achieve approximate comparability with data from other countries.

** Number of animals sold (not necessarily used). Source: Japanese Society of Laboratory Animals. (Orlans, 1998, p.401)


In 1992 the American Medical Association (AMA) estimated that, in the USA, biomedical researchers experiment on 17-22 million animals each year. Other people believe that the number of animals used is higher. (LaFollette & Shanks, 1996, p.vii) Indeed LaFollette and Shanks found that the estimates of the number of animals used varied widely, partly because the most commonly used animals, rats and mice, were not even counted. (LaFollette & Shanks, 1996, p.267)

It would appear that there has been very little accounting of the benefit to humans. It would also appear that although there has been some accounting of the costs to the animals, that accounting, for the most part, has been an accounting with very little accuracy. Thus one could reasonably conclude that the moral argument that the benefit to humans greatly outweighs the cost to the animals is not supported by accurate empirical data.


Our fourth question concerns the relevance of non-human animals as models for the human condition. The question of relevance is of particular importance. Within the moral argument that the benefit to humans greatly outweighs the cost to the animals is a hidden premise, an embedded assumption, that non-human animals are relevant models for the human condition. If non-human animals are not relevant models for the human condition, then the argument would contain a false premise and therefore the argument would be unsound.


It so happens that the point of relevance has also become a central problem in the biomedical paradigm. A methodological schism has developed in biomedical theory between the reductionist physiologists and the evolutionary biologists; and the point of contention is that the evolutionary biologists have accused the reductionist physiologists of not taking the consequences of the evolutionary process into account. To understand the basis of the differences of opinion, one has to look at the origins of the modern biomedical paradigm. The founder of the present biomedical method was a nineteenth century physiologist named Claude Bernard. Bernard's understanding of the nature and the role of clinical medicine was framed by the nineteenth century debate between the hypothetico-deductivists and the inductivists. The hypothetico-deductivists believed that the distinction between the context of discovery and the context of justification marked a crucial methodological divide. They claimed that a hypothesis is confirmed, or made probable, if it explains and predicts a variety of scientifically significant phenomena. The inductivists claimed, on the other hand, that it was not enough that the hypothesis accurately predicted the laboratory results. They believed that there had to be an independent warrant between the hypothesis and the phenomenon that it explained and predicted; otherwise the hypothesis may be no more than an artefact of the investigator's fancy. They believed that hypothetico-deductivism was methodologically flawed, since it might be possible to confirm two conflicting hypotheses by the same evidence. It was in this environment that Claude Bernard asserted the primacy of interventionist laboratory science, and denigrated clinical observational medicine. He wanted the physiologists to share the chemists' and physicists' commitment to controlled laboratory experimentation. (LaFollette & Shanks, 1996, pp.37-38)

Bernard was a causal determinist. He believed that if medicine was based upon statistics, it would never be anything more than a conjectural science; and only by basing itself on experimental determinism could medicine become a true science. He believed that if seemingly identical systems behaved differently, then there had to be a difference in the initial conditions that accounted for this difference. Bernard was well aware of species differences; however, he assumed that all laws were deterministic and uniform across nature. He specifically rejected the very idea of fundamental or irreducible species differences, and further rejected the idea of genuinely statistical laws. (LaFollette & Shanks, 1996, pp.41-42) Crucial to the understanding of Bernard's view is his distinction between the cause of a phenomenon, and the means by which that phenomenon is produced. According to Bernard,

The determinism, i.e., the cause of the phenomenon, is therefore single, though the means for making it may be multiple and apparently very various … But the real and effective cause must be constant and determined. (LaFollette & Shanks, 1996, p.44)



Bernard was committed to a reductionist methodology. For Bernard, species differences were not ultimately qualitative, but quantitative. (LaFollette & Shanks, 1996, pp.45, 47)

In the nineteenth century leading French biologists, such as Bernard, were resistant to the Darwinian theory of evolution; and one of the reasons for this resistance was that the theory of evolution was not entirely satisfactory to either the hypothetico-deductivists or the inductivists. According to LaFollette and Shanks,

It is important to understand that Bernard rejected evolutionary biology, and that his rejection of the theory springs from his methodological commitments. This Bernardian paradigm was shaped by the deterministic, "bottom-up" approach of the nineteenth century physical sciences. This commitment to determinism helps explain his rejection of evolution, and, consequently, his belief that species differences could be safely ignored. (LaFollette & Shanks, 1996, p.52)


According to LaFollette and Shanks, the perspective of most biological researchers is as follows:

[Researchers] think clinical and epidemiological studies on humans are scientifically second-rate. In well-designed experiments researchers strictly control all relevant variables. Then they introduce some stimulus and record the results. Without those controls, the experimenter cannot know exactly what caused the results. However, by using strict controls, experimenters can gain crucial causal information. …. Humans live different lifestyles and in different environments. Thus they are insufficiently homogeneous to be suitable experimental subjects. These confounding factors undermine our ability to draw sound causal conclusions from human epidemiological surveys. Confounding factors are variables (known and unknown) that make it difficult to isolate the effects of the specific variable being studied. (LaFollette & Shanks, 1996, pp.19-20)

There is practically no way of replacing animals in these investigations and so-called "alternative methods" are in reality merely complementary. Tissue cultures, cells, micro-organisms, enzymes, membranes, mathematical models: all are useful for preliminary screen tests and for testing hypotheses, but the complexity of a living organism is such that in vivo studies are essential before any test can be responsibly made in man. [According to Garattini and van Bekkum in 1990] (LaFollette & Shanks, 1996, pp.7-8)


Thus crucial to our question of relevance is whether or not Bernard's view is correct.





The two principal networks in any living system are:

a. The protein network, which consists of:
   - the genetic (regulatory) network, and
   - the protein-protein network.

b. The metabolic network.

Pair correlations are the correlations of the degrees [6] of nearest-neighbour nodes in a network; and they are characterised by a joint distribution of the degrees of the nearest neighbours, P(k, k1). The pair correlations are absent in classical random graphs, and in many other equilibrium networks, but occur naturally in growing networks. (Dorogovtsev & Mendes, 2003, p.49) Maslov and Sneppen investigated the pair correlations in both the protein physical interaction network and the genetic transcription regulatory network of the yeast S. cerevisiae, and found strong pair correlations in both protein networks. (Dorogovtsev & Mendes, 2003, p.60) According to Dorogovtsev and Mendes,

The situation is very similar to the Internet. … [M]ost of the highly connected vertices [7] have nearest neighbours of low degrees. This similarity shows that, like the Internet, protein networks are non-equilibrium ones. Each of them may be treated as an intermediate state of a long evolutionary process. The observed architecture of the protein networks of the yeast, which combines a fat-tailed distribution of connections and the small-world compactness, has two evident consequences: robustness against mutations and shortness of regulatory pathways. There are few doubts that this optimal structure is inherent for all protein networks. (Dorogovtsev & Mendes, 2003, p.60)

[6] The degree is the number of connections. The total number of connections of a vertex is called its degree k. (Dorogovtsev & Mendes, 2003, p.6)

[7] A vertex is a node. Physicists often use the terms 'node' and 'site', but the term 'vertex' is the standard term used in both graph theory and computer science. (Dorogovtsev & Mendes, 2003, p.6)


Thus the

protein
-
protein

network
and the

genetic

transcription regulatory

network

both appear to be
scale
-
free network
s
, and are likely to be Class 4 systems.
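Pair correlations of the kind Maslov and Sneppen measured can be tabulated for any undirected edge list. The sketch below (an illustrative toy; the edge list is invented for the example) counts the joint degrees of linked nodes and the mean nearest-neighbour degree for each node degree, the quantities behind P(k, k1):

from collections import Counter, defaultdict

def pair_correlations(edges):
    """Joint degree counts of linked nodes, and the mean degree of the
    nearest neighbours of nodes of each degree."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    joint = Counter()                  # unnormalised counts behind P(k, k1)
    neighbour_degrees = defaultdict(list)
    for a, b in edges:
        ka, kb = degree[a], degree[b]
        joint[(ka, kb)] += 1
        joint[(kb, ka)] += 1
        neighbour_degrees[ka].append(kb)
        neighbour_degrees[kb].append(ka)
    mean_nn_degree = {k: sum(v) / len(v) for k, v in neighbour_degrees.items()}
    return joint, mean_nn_degree

if __name__ == "__main__":
    # Invented edge list: node 0 is a hub linked mostly to low-degree nodes.
    edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (3, 5), (4, 6)]
    joint, mean_nn = pair_correlations(edges)
    print("mean nearest-neighbour degree, by node degree:", mean_nn)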


The genomes of different species are different; and sometimes they are radically different. Even when there is a difference of less than 2% in the genes present in the DNA of two different species, as there is between chimpanzees and humans [8], there is a substantial difference in the morphological structure of the phenotypes of these different species. This is hardly surprising, since, if the nodes in a network are different, then the way the nodes are connected is likely to be different; and the properties of a network depend upon the way the nodes are connected. In addition, it has been shown that the metabolic networks are also scale-free networks; and that although the metabolic networks in the individual cells of different species may share the same hubs, the metabolic networks in the cells of the 43 different species examined by Hawoong Jeong shared only 4% of the same molecules, and they were, therefore, substantially different

[8] In the 1980s, Charles Sibley and Jon Ahlquist discovered that 98.4% of human DNA is exactly the same as chimpanzee DNA. (Fouts, 1997, p.53)

networks.

From recent research into network evolution, it is believed that the scale-free topology of a network is evidence of organising principles that are acting at each stage in the evolutionary process of that network. Thus in considering each organism, one would need to consider, at least:

a. The genome of the species (or more specifically, the genome of that particular member of that particular species).

b. The specific type and number of molecules in the metabolic network of the cells of that particular member of that particular species.

c. The specific protein-protein networks of that particular member of that particular species.

d. The physical and social factors that influence the development of those networks at each stage in their evolution.


As Wolfram has pointed out, the repetition period greatly depends upon the exact size of the network (or sub-network); and varies as the size of the network (or sub-network) varies. Thus the repetition period for any network will depend not only upon the exact rule by which the network evolves, and thus upon the degree of asymmetry between the fitness-connectivity products of each of the nodes in the network, but also upon the exact size of the network at any point in time.


Thus it would appear that the network evolves not only from the 'bottom-up' but also from the 'top-down'. It would also appear that there are qualitative as well as quantitative differences between the different species. Therefore it would appear from these more recent discoveries that the evolutionary biologists are indeed correct, and that Bernard was incorrect. There seems to be a reasonable case on scientific grounds for questioning the relevance of the use of animals in many, if not most, scientific experiments. In addition, the moral argument used to justify the use of non-human animals as models for the human condition would appear to contain a false premise, and therefore this argument is unsound.

Leopold has pointed out that, by and large, our present problem is one of attitudes as well as implements. Traditional ethical theories underpin our moral outlook, our legal philosophy and our public policy decisions. The question remains: 'Do we act the way we do act because of this degree of asymmetry, or partiality, in our decision-making process?' The question is worth asking. If we wish to address the problems that exist as a result of our present interactions with non-human systems, we must first address the reasons for our present attitudes and behaviour towards such systems.

















References.

Barabási, A.L. (2002) Linked: The New Science of Networks, (Cambridge, Massachusetts: Perseus Publishing)

Baars, B.J. (2001) 'There are no known differences in brain mechanisms of consciousness between humans and other mammals', Animal Welfare, Vol. 10 Supplement, Consciousness, Cognition and Animal Welfare, Proceedings of the UFAW Symposium, Zoological Society of London's Meeting Rooms, London, May 11-12, 2000

Cann, A.J. (1997) Principles of Molecular Virology, 2nd edn., (London: Academic Press)

Dorogovtsev, S.N. & Mendes, J.F.F. (2003) Evolution of Networks: From Biological Nets to the Internet and WWW, (Oxford: Oxford University Press)

Dr Hadwen Trust (2002) 'The statistics of scientific procedures on living animals', <http://www.crueltyfreeshop.com/drhadwen/stats.htm>

Fouts, R. & Mills, S. (1997) Next of Kin, (London: Michael Joseph Ltd)

Jeong, H., Mason, S.P., Barabási, A.-L. & Oltvai, Z.N. (2001) 'Lethality and centrality in protein networks', in Nature, Vol. 411, May 3, p.41.

Kant, I. (1991) The Moral Law: Groundwork of the Metaphysic of Morals, translated and analysed by H.J. Paton, (London: Routledge)

LaFollette, H. & Shanks, N. (1994) 'Animal Experimentation: the Legacy of Claude Bernard', International Studies in the Philosophy of Science, pp.195-210, <http://www.stanford.edu/dept/HPST/SciMedOrg/Sources/LaFolletteShanks.html>

LaFollette, H. & Shanks, N. (1996) Brute Science: Dilemmas of Animal Experimentation, (London: Routledge)

Leopold, A. (1968) A Sand County Almanac, (London: Oxford University Press)

Macnaghten, P. (2001) Animal Futures: Public Attitudes and Sensibilities towards Animals and Biotechnology in Contemporary Britain, a report by the Institute for Environment, Philosophy and Public Policy for the Agricultural and Environmental Biotechnology Commission, October. <http://domino.lancs.ac.uk/ieppp/Home.nsf/ByDoc1D/9220807C6AE23D8580256B6D00446793/$FILE/animal+futures.pdf>

Orlans, F.B. (1998) 'History and ethical regulation of animal experimentation: an international perspective', in A Companion to Bioethics, H. Kuhse & P. Singer (eds.), (Oxford: Blackwell Publishing)

Sen, A. (2002) 'Open and closed impartiality', in The Journal of Philosophy, Volume XCIX, No. 9, (New York: The Journal of Philosophy, Inc.), September.

Sherwin, C.M. (2001) 'Can invertebrates suffer? Or, How robust is argument-by-analogy?', Animal Welfare, Vol. 10 Supplement, Consciousness, Cognition and Animal Welfare, Proceedings of the UFAW Symposium, Zoological Society of London's Meeting Rooms, London, May 11-12, 2000

Singer, P. (1995) Animal Liberation, (London: Pimlico)

Wilson, E.O. (1998) Consilience: The Unity of Knowledge, (London: Little, Brown and Company)

Wolfram, S. (2002) A New Kind of Science, (Wolfram Media, Inc.)

Wolfram, S. (1991) Mathematica: A System for Doing Mathematics by Computer, 2nd edn, (Addison-Wesley)

Worster, D. (1994) Nature's Economy: A History of Ecological Ideas, (Cambridge: Cambridge University Press)