A.J. Ijspeert et al. (Eds.): BioADIT 2004, LNCS 3141, pp. 496–512, 2004.
© Springer-Verlag Berlin Heidelberg 2004
The Genealogy of Biomimetics: Half a Century’s Quest
for Dynamic IT
Mikkel Holm Sørensen
Dept. Digital Aesthetics and Communication,
IT University of Copenhagen,
Glentevej 67, Copenhagen 2400 NV, Denmark
megel@itu.dk
Abstract. Biologically inspired approaches to the design of general IT are presently flourishing. Investigating the scientific and historical roots of this tendency will help prepare properly for future biomimetic work. This paper explores the genealogy of the contemporary biological influence on science, design and culture in general in order to determine the merits of the tendency and the lessons to be learned. It is argued that biomimetics rests on bona fide scientific and technical reasons for the pursuit of dynamic IT, but also on other, more external factors, and that biomimetics should differentiate the relevant from the superficial. Furthermore, the search for the dynamic capacities of IT that mimicking adaptive processes can bring about is put forward as both the history and the raison d'être of biomimetics.
1 Lifelike – à la Mode
Biology is enjoying enormous attention from different scientific fields as well as culture in general these days. Examples are legion: the victorious naturalization project in philosophy and psychology spearheaded by cognitive science in the second half of the 20th century; the exploration of biological structures in the engineering of materials or architectures [1]; a dominant trend of organismoid designs with 'grown' curves replacing straight lines to convey a slickness and efficiency not previously associated with life;¹ World Expo 2005 being promoted under the slogans "Nature's Wisdom" and "Art of Life";² and biology's new status as the successor of physics as the celebrity science that gets major funding and most headlines.
These examples are neither historically unique nor culturally revolutionary. Life and nature have been fetishized before. Yet the fascination with the living has never previously dominated with such universality and impetus as we presently experience. So we might ask: what is the reason for this ubiquitous interest in life, and is it a result of cultural and scientific progress or merely an arbitrary fluctuation soon to be forgotten again?
In order to prepare properly for future biologically inspired approaches to IT design, this paper investigates the roots of the biological dominance by reconstructing the recent history of techno-scientific ideas: a history that can be characterized as a pursuit of dynamic IT. The objective is to distill important lessons learned, thereby providing good conditions for the continued effort to develop dynamic IT through biomimetic design, by identifying the proper challenges to embark on and the dead ends to avoid.

¹ Think of cars, sports apparel, furniture, mobile phones, watches, sunglasses etc.
² http://www.expo2005.or.jp/
2 Biomimetics: Definition, Characteristics, and Motivations
The first step in this investigation of biologically inspired approaches to IT design³ is clarifying and qualifying the notion of biomimetics [2, 3]. 'Biomimetics' has been chosen as the best unifying notion for biologically inspired approaches to the design of dynamic artifacts, being intuitively descriptive and adequately precise. If the following analysis reveals a specific or even idiosyncratic notion of biomimetic design, it hopefully nonetheless contributes to an increased awareness of the conceptual foundation of biologically inspired approaches in general and helps prevent misunderstandings and conceptual vacuity.
According to Merriam-Webster's online dictionary, biomimetics is:
the study of the formation, structure, or function of biologically produced substances and materials (as enzymes or silk) and biological mechanisms and processes (as protein synthesis or photosynthesis) especially for the purpose of synthesizing similar products by artificial mechanisms which mimic natural ones.⁴
This definition covers two slightly different meanings of biomimetics:
1. The artificial synthesis of naturally occurring materials, substances or other structural configurations.
2. Mimicking biological processes in creating life-like products.
Both meanings concern the synthesis of specific materials or structures, i.e. the synthesis of a certain 'end product', and they merely differ in how directly and in which manner the product is brought about. Biomimetics thus characterized is not an appropriate label for an IT design methodology. Instead I would like to put forward a definition of biomimetics more suitable for the approach:
3. The mimicking of complex self-organizing natural processes to obtain dynamic artifacts harboring adaptive and self-maintaining capacities.
Whereas 1) and 2) concern the creation of fait accompli products, a biomimetic IT design methodology 3) instead creates dynamic 'produces', i.e. evolutionary processes involving IT devices that adapt in use [4].
This does not mean that biomimetics is 'anti-materialistic'. On the contrary, a better integration of software and hardware will become an important objective for biomimetics: firstly, in an effort to enhance physical objects and spaces with digital dimensions, providing novel functions free of spatial constraints [5]; secondly, to develop self-assembling, evolvable and re-configurable materials for a 'deeply integrated' dynamic IT adaptive at both software and hardware level; thirdly, in the study of computation itself, with models integrating structural and topological characteristics of natural computation as it occurs at the chemical level (e.g. 'lock and key'). In addition, integrative design becomes a necessity as increasingly miniaturized hardware starts having idiosyncratic characteristics due to microphysical effects. Such minuscule hardware will have to be functionally coupled with software in some way, and collective 'growth' seems a good remedy for heterogeneous characteristics.

³ Design is a broader notion than engineering and covers all aspects of creating artifacts (methodological, aesthetic, sociological etc.), whereas engineering only concerns the concrete bringing forth of the artifact.
⁴ http://www.m-w.com/cgi-bin/dictionary
2.1 Biomimetics: Design for Dynamics
A biomimetic design methodology capitalizing on the self-organizing capacities of evolutionary processes might appear to be a contradiction in terms. 'Design' normally characterizes an intentional and teleological, strictly human practice, whereas natural order arises 'blindly' and post hoc from cycles of variation, selection and retention. Biomimetic design thus either means 'un-designed design' or erroneously projects intentionality into adaptation.
According to [6], a crisp distinction between the teleology of design and the causality of evolution does not stand up to close inspection. The argument goes, correctly (cf. [7, 8]), that because the human brain itself basically operates by amplification of fluctuations (variation, selection and retention dynamics), thoughts and ideas are evolutionarily selected post hoc rather than deliberately created de novo. The difference between cognition and other evolutionary dynamics therefore becomes merely ontologically regional and not intrinsic. If our designing skills, in other words, are just the result of high-level evolutionary processes, there is no essential difference.
The middle way, which I will put forward, is that there is a significant difference between human design and other evolutionary processes, if only in degree and not in kind, but that this on the other hand does not render the idea of biomimetics incoherent. Despite some terminological fuzziness and the merit of the argument of [6] with respect to cognition, the notion of biomimetics nonetheless adequately covers the specific design methodology under scrutiny here. Whatever the micro-processes underlying cognition, there is a difference between the emergent macro-process of human deliberation and the agent-less achievement of solutions merely by due means (e.g. by an autonomously self-organizing technology). In fact a biomimetic methodology is quite different from conventional design approaches, and the outcome no less different. The notion of design simply changes when the role of agency in designing is distributed and even hard to identify, as is the case for example with evolutionary algorithms. The standard notion of design as a top-down controlled act no longer holds if parts of the design emerge from self-organizing processes [4]. Moreover, identifying the 'agency' responsible for a specific state of affairs is pivotal for psychological and ethical issues related to technology, and this becomes relevant with increasingly autonomous technology.
The concept of biomimetics also needs qualification in a different sense. The principle of nature primarily mimicked by a biomimetic approach to IT design, adaptive dynamics, is not exclusively biological. Much research suggests that the self-organization of groups by variation, selection and retention is a universal ordering principle governing everything from basic physical processes, such as crystal growth, to sociological processes, such as the proliferation of ideas [7, 8, 9, 10]. Biology is just one, albeit very prominent, domain of evolutionary dynamics, and happened to be the field first described by such terms. Again, this lack of terminological adequacy is not harmful if the notion is deployed rigorously for the specific approach characterized in this paper.
On the basis of this terminological clarification, biomimetics can be characterized. Biomimetics is a design methodology for complex artifacts, deployed to support human design with self-organizing evolutionary mechanisms. Biomimetics is a 'meta-governing' approach in the sense that it retains human control of the overall functional norms of artifacts while exploiting evolutionary processes to provide the functions required (cf. [11]). Biomimetics simultaneously provides an improvement of our design and, in specific circumstances, a reduction of the labor going into it, by leaving some parts of design to evolutionary self-organization. Biomimetics thus seeks to capitalize on the respective (and complementary?) strengths of evolutionary processes and human creative and teleological capabilities [29].
There is a range of reasons why a general bio-inspired tendency has arisen and most probably will continue to grow within IT research. Let us take a look at some of these to get a better picture of the nature of biomimetic IT design.⁵
One of the main challenges facing IT design is finding the means to develop and maintain ever-growing IT systems. IBM's Autonomic Computing Project⁶ is motivated by estimations that the development and maintenance of future IT systems will be impossible without new ways of designing such systems. IT needs to take care of itself, and living systems provide so far the only examples of just this capacity. Life has developed means to evolve, develop and learn by adaptive dynamics, and since we have sciences concerned with the organization of adaptive systems, primarily biology but also younger transdisciplinary fields such as dynamic and complex systems theory, it is instructive to consult models and theories from these fields when developing future IT.
Second, our cognitive capabilities are evolutionarily constrained, and we simply cannot fully overview, let alone control, very complex structures or processes. History is filled with examples of how technologies turned out differently than expected, dispatching themselves from our control, and we have no reason to believe that this is about to change.⁷ Who could have predicted that the surfing behavior resulting from the introduction of the remote control would change the very content of TV broadcasting [5], or that SMS-organized mobs would bring about social change because of a simple feature on mobile phones [12]? Faced with highly non-linear, dynamic and complex systems, our cognitive capacities are simply inadequate and leave us without a chance when trying to analyze the long-term and global consequences that become increasingly important with growing systems. Add to this fluctuating user practices, increasing proportionally to the freedom technology provides. Acknowledging our limited powers, we should abandon the notion of a fully controlled top-down design process and join a fruitful alliance with some of nature's benevolent ordering principles.

⁵ For supporting or additional reasons to apply biomimetics see [1, 2, 3, 4, 9, 11, 13].
⁶ http://www.research.ibm.com/autonomic/
⁷ The uncontrollable nature of technology is the very basis for the argument of 'technological determinism' popular among luddites and variants of philosophies of technology.
Third, modern theories of complex adaptive systems have gained valuable knowledge of dynamic processes and offer models for the local behavior of constituents as well as the global characteristics of systems. This conceptual toolbox will prove immensely important in designing IT systems as nested and hierarchical complexes of systems interacting on multiple levels. Further, the theories facilitate scaling to avoid disintegrated 'stratified' views.⁸ The quality of future technology depends on a better general understanding of interacting systems on different scales, from device-device to whole networking societies. IT has to be designed in ways that accommodate rapidly changing practices; mobile, long-distance and trans-media corporations; and other forms of (unforeseeable) changes of conditions. From the design of individual devices to the general organization of IT systems, architectures and protocols must be mutually supportive to carry biomimetics to its full strengths [2, 3].
3 Genealogy of 20th Century Bio-centrism
The present interest in life is not historically unique, but seems to occur periodically. In relation to biologically inspired design of IT, technology has always been modeled after, as well as been a model for, the dominant conception of life. This dialectic stems from our desire to understand and master nature. To (re-)produce is to comprehend (verum et factum convertuntur) has been the credo throughout scientific history. Hence by recreating life we might hope to get behind the veil of nature's mystery and peek into God's workshop. The only variations in this perennial dream have been the changing epochs' metaphysical conceptions of life: for example, the ancient sculptures created in dirt, thought to be one of the four basic elements; Hellenic hydraulic automata modeling Aristotelian 'motivation' and 'movement' ('movere' is the etymological root of motion, emotion and motivation); the intricate mechanical animals and chess players of the dawning mechanistic natural science; the steam-driven machines of 19th-century thermodynamics; the postwar computational robots; and self-organizing ALife at the turn of the millennium.
The bio-techno pivot is completed by the fact that the technological reproduction of nature is fueled by man's perpetual religious and pragmatic awe of the ingenuity of nature's 'design'. This awe is so firm that it has been difficult to convince people (many are still not convinced!) that the 'design' of nature in fact emerges spontaneously from self-organizing processes without any teleological agency.
However, scientific developments since the late 19th century paved the way for a hitherto unparalleled blossoming of our fascination with the living, and not least of the efforts to make good use of our insight into its governing principles. During the last century science became increasingly preoccupied with systems, complexity, dynamics and information: phenomena that are all notoriously manifest in organisms, and thus biology naturally took the center of the scientific stage. At the same time pollution entered the stage and sympathy for nature rose. After a subsequent period of dichotomization of nature and technology, scientific insights into the wonders of complex systems proliferated. From being pictured as slow, vulgar, dirty and beastly, nature suddenly became appreciated for speed, immunity, efficiency and economy: capacities normally identified with hi-tech but now borrowed from nature.

⁸ As instrumental as the 'software-hardware' distinction is descriptively, it might turn out to be a methodological hindrance for future IT design if taken ontologically.
In the following sections I will provide a brief reconstruction of the main scientific and technological tendencies of the 20th century that led to the present interest in living systems. The reconstruction is divided into historical themes, which should not be taken rigidly. Historical overviews are by nature selective and begin in medias res. Besides, I have no wish to undertake a comprehensive review of recent scientific history. To simplify and clarify matters, cybernetics will be our protagonist, both as our point of departure and as the undercurrent of the genealogy of biomimetics.
3.1 Cybernetics
Like all other cultural manifestations, cybernetics did not arise in a vacuum but grew from scientific development embedded in a historical context [14]. Important scientific discoveries and accumulated knowledge fertilized the ground for the converging ideas of research in missile-guiding systems, animal behavior, sociology, neurology and computation around WWII. Cybernetics was founded by a more or less coherent movement of various leading scientists and engineers who met from 1946 to 1953 at the so-called Macy conferences (after the Josiah Macy Jr. Foundation, which funded the meetings). Moved by discoveries made during the previous decades within mathematics, physics, biology and chemistry, the participants set out to explore processes in complex systems. In fact the very idioms 'complexity' and 'system', together with 'feedback', 'circular causality' and 'information', were created by the cyberneticians in their pursuit of fundamental models for dynamic systems. One of the primary motivations, at least among central figures such as Norbert Wiener, Arturo Rosenblueth and Warren McCulloch, was the similarity between certain mechanical and biological processes. In particular, the purpose-guided behavior of animals, or the 'teleology of organisms' as they put it themselves, became the model for self-adjusting machines. Purposeful behavior seemed basically to consist in adjusting behavior by recursively computing the difference between the present state and a reference state. This simple feedback-cum-computation model of so seemingly supernatural a phenomenon as teleology promised further solutions to hard philosophical riddles of the mind.
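The feedback scheme described here reduces to a short loop: measure the error between the present state and the reference state, then correct a fraction of it. The sketch below is a minimal illustration of that principle; the gain value and the numbers are invented for the example, not drawn from the paper:

```python
def feedback_step(state: float, reference: float, gain: float = 0.5) -> float:
    """One cycle of the feedback loop: compute the error between the
    present state and the reference state, then correct a fraction of it."""
    error = reference - state
    return state + gain * error

# A system starting far from its reference state converges toward it
# purely by repeating the local error-correcting step.
state = 0.0
for _ in range(20):
    state = feedback_step(state, reference=10.0)
```

Nothing in the loop 'knows' the goal as a represented purpose; the seemingly goal-directed convergence emerges from repeated error correction, which is exactly the point the cyberneticians were making.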
For early cybernetics, the computation taken to be the substrate of teleology was, unlike in its successors classical AI (GOFAI) and cognitive science, conceived of as strictly mechanical [15]. The important difference is that the computational paradigm for intelligence and semantics represented by GOFAI and cognitive science took computation to consist of rule-guided manipulation of symbolic entities already endowed with meaning, or gaining meaning by the syntactical operations themselves. Cybernetics did not operate with such 'semantic computation'. Intentionality and semantics were instead taken to be, at best, higher-level phenomena arising from computational processes. In contrast, GOFAI and cognitivism seemed to sneak semantics in through the backdoor via syntactical sleight of hand, by projecting higher-level characteristics into semantic, intentional or normative building blocks whose genesis lies indefinitely far back in evolutionary history. The cybernetic idea, which is being echoed today in modern cognitive science, is that if seemingly irreducible phenomena relating to the mind cannot be explained (either as real phenomena or as 'folk psychology') as the result of processes deprived of such qualities, the same phenomena cannot logically be explained by evolution. Then they must somehow be put into each organism (by God or miracle). A static understanding of mental capacities dictates that either they have always existed beside the physical system (a scientifically unsatisfying dualism) or they are just illusions (phenomenologically inadequate). So, as in Hegelian dialectics, cybernetics gives the modality of such elusive phenomena the position between pure being and non-being, namely becoming. Intelligence is explained as a process capacity [16, 17, 18].
Worth noting is the cybernetic intuition of what we today call the emergence of global systemic capacities (e.g. purpose-guided behavior, adaptivity and self-maintenance). Through a full-blooded and honest adherence to a mechanical conception of algorithms, early cybernetics acknowledged the need for a dynamic conception of the mind.⁹ Properties arising from complexity, such as non-linear dynamics and self-organization, formed the core of this emergentist explanation of purposeful behavior arising from mechanical processes: mechanical in the sense of formally describable processes giving rise to self-regulating behavior without any entelechy or teleology in the standard sense. Cybernetics thus placed itself between the reductionism of traditional physicalism and the transcendentalism of (some) philosophical approaches, by stressing the scientific importance of mathematical models while making room for the autonomy of emergent levels of description. This turn represented an early version of a slowly propagating undercurrent of science towards an interest in processes and dynamics instead of traditionally predominantly atomistic and structural scientific models. In more grandiose terms, cybernetics manifested the general movement from 'substance metaphysics' to 'process metaphysics' of 20th-century science [16].
Over the years cybernetics began stressing the contribution of the system itself in processing input into behavior. This 'second wave' of cybernetics reached its zenith with [19], which argued for the 'constructivism' of systems orchestrating their inner organization in response to interactions with their environment. Whereas early cybernetics sometimes resembled behaviorism, its dominant precursor, by the early 1970s it had reached the opposite pole with the theories of self-organization and 'autopoiesis' of Humberto Maturana and Francisco Varela. These theories opened up the 'black box' of cognition to an extent that seemed to occlude everything external. From being a general philosophy of complex systems, cybernetics had come to stress epistemology. Yet, in the true spirit of cybernetics, such self-organizational epistemological characteristics were still viewed as integrated aspects of the overall self-maintenance of adaptive systems [20].
3.2 The Ratio Club
On the other side of the Atlantic, early cybernetics had a stepsister, heavily interwoven with the Americans, but prominent enough to deserve separate treatment here. In London a group including Alan Turing, W. Grey Walter and Ross Ashby, named the Ratio Club, met to discuss scientific and engineering questions, also partly motivated by work on war machinery. In their interaction with the American cyberneticians, the Ratio Club members came to play an important role as founders of second-wave cybernetics and its increased focus on self-organization.

⁹ It must be noted that the cyberneticians did not form a homogeneous group advancing a single coherent theory. For differences in conceptions of 'mechanics' see [15].
Although Ross Ashby only participated in a single Macy meeting (the 9th), he was very influential on cybernetics. At his only appearance Ashby presented his ideas on the 'homeostat'. The homeostat is an (abstract) adaptive automaton which keeps essential parameters at equilibrium through interaction with its surroundings while letting other parameters fluctuate when required. The homeostat provided a formalization of the self-maintaining principles of complex adaptive systems, as manifested by life: the dynamic balancing between 'freezing' into total order and disintegrating into pure chaos. The homeostat depicted complex adaptive systems as inherently "poised at the edge of chaos", as it was later phrased by one of Ashby's famous pupils, Stuart Kauffman. Ashby was thus responsible for placing complexity and self-organization at the heart of principia cybernetica and its subsequent metamorphosis into its second wave.
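Ashby's principle can be caricatured in a few lines of code: when an essential variable drifts outside its viable bounds, the unit randomly rewires its own parameters until equilibrium is restored. This is a loose, hypothetical sketch of the idea, not a reconstruction of Ashby's actual electromechanical device; the bounds, gain values and noise level are invented for the example:

```python
import random

def homeostat_step(state, weight, disturbance, bounds=(-1.0, 1.0)):
    """One step of a caricatured homeostat: the essential variable follows
    a feedback weight; if it leaves the viable region, the unit picks a
    new random weight (a stand-in for Ashby's 'uniselector' rewiring)."""
    state = weight * state + disturbance
    if not bounds[0] <= state <= bounds[1]:
        weight = random.uniform(-0.9, 0.9)          # random reconfiguration
        state = max(min(state, bounds[1]), bounds[0])
    return state, weight

random.seed(1)
state, weight = 0.5, 1.8        # deliberately unstable initial configuration
for _ in range(100):
    state, weight = homeostat_step(state, weight, random.gauss(0, 0.05))
```

The unstable starting weight quickly drives the state out of bounds, the unit rewires itself into a stable configuration (|weight| < 1), and thereafter the essential variable stays within its viable region: order found by constrained random search rather than by design.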
The neurologist and engineering genius Grey Walter is an excellent example of the spirit of the time and the renaissance-like feats it fostered. Walter conducted pioneering research in neurology (including pioneering work with the EEG) and his work on defense systems resulted in the radar display still common in marine and aviation control today. But his explicitly biomimetic work on the two autonomous robot 'tortoises', Elmer and Elsie, stands out as the most visionary. On a purely mechanical architecture the tortoises were designed with bio-analogue, feedback-guided 'needs', which gave rise to seemingly 'motivated' and spontaneous behavior. Endowed with simple photo and tactile sensors, they had phototactic capabilities enabling them to locate their lighted hut when 'hungry', i.e. when their batteries needed recharging, and to leave it again when the batteries were charged and the light, due to a switching mechanism, became aversive. Walter's work remains an astounding study in ingenious robotics and a milestone in biomimetic history as an early example of the prospects for implementing even idealized biological principles. It is worth quoting Walter J. Freeman's praise of Walter's work at length:
The significance of Walter's achievements can be understood by recognizing that these complex adaptive behaviors came not from a large number of parts, but from a small number ingeniously interconnected. These devices were autodidacts. They learned by trial and error from their own actions and mistakes. They remembered without internal images and representations. They judged without numbers, and recognized objects without templates. They were the first free-ranging, autonomous robots capable of exploring their limited worlds. They are still the best of breed. [...] His devices were the forerunners of currently emerging machines that are governed by nonlinear dynamics, and that rely on controlled instability, noise, and chaos to achieve continually updated adaptation to ever-changing and unpredictable worlds. He can well be said to have been the Godfather of truly intelligent machines [21].
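The switching mechanism behind the tortoises' light-seeking behavior can be illustrated with a toy decision rule. Everything here (the threshold values, the scalar 'light level' and 'battery' readings) is invented for illustration; Walter's actual machines were analogue circuits, not programs:

```python
def tortoise_action(light_level: float, battery: float) -> str:
    """Toy version of the tortoises' switching mechanism: light is
    attractive when the battery is low ('hunger') and aversive once
    the battery is charged."""
    hungry = battery < 0.3                  # hypothetical 'hunger' threshold
    if hungry:
        return "approach_light" if light_level > 0.0 else "explore"
    return "avoid_light" if light_level > 0.5 else "explore"

# The same stimulus yields opposite behavior depending on internal state:
# tortoise_action(0.8, 0.1) -> "approach_light"
# tortoise_action(0.8, 0.9) -> "avoid_light"
```

The point of the sketch is that 'motivation' needs no internal representation: one sensor, one internal variable and a sign flip suffice to produce behavior that looks spontaneous and goal-directed.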
3.3 Evolutionary Computing
If cybernetics soon became neglected by its offspring (computer science, GOFAI and cognitive science), its current resurrection was nevertheless prepared by developments within the family itself. In the 1960s and 1970s the computer scientist John Holland developed Genetic Algorithms (GAs) as a general way of creating software solutions by evolutionary adaptive processes. Sidestepping questions of intentionality, the mind and other philosophical issues, Holland showed how algorithms mimicking evolutionary processes were very potent in searching, problem solving and coping with uncertainty and change. In contrast to other post-cybernetic biologically inspired approaches to computational engineering, Holland was not interested in optimization or solutions to specific engineering problems per se. He wanted to model adaptation formally, leading to a general understanding [22]. The GA model Holland presented in [23] consisted of algorithms organized in populations of competing chromosomes coding for different solutions to a given problem. Chromosomes coding for successful solutions were reproduced by the exchange of genetic material (via crossover) and random mutation. Due to the speed of computers, the evolution of solutions over many generations allowed for fast, reliable and almost automated software programming.
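The selection-crossover-mutation cycle just described fits in a few lines. The sketch below is a minimal GA on the classic 'OneMax' toy problem (maximize the number of 1-bits in a chromosome); the problem, population size and rates are illustrative choices, not Holland's own parameters:

```python
import random

def evolve(pop_size=30, length=20, generations=60, p_mut=0.02):
    """Minimal genetic algorithm: bit-string chromosomes, fitness = number
    of 1s, truncation selection, one-point crossover, per-bit mutation."""
    fitness = lambda chrom: sum(chrom)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = a[:cut] + b[cut:]
            # per-bit mutation: flip each bit with probability p_mut
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
```

No chromosome is ever designed; a near-optimal bit string is discovered by heritable variation under selection, which is the pattern the paper credits Holland with generalizing.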
What Holland's work made clear was that, faced with unknown tasks, changing conditions or other uncertainties, a population of candidates undergoing heritable variation, combined with a good selection heuristic, is a potent strategy. Holland's work demonstrated how nature's principles for problem solving were, at least in some domains, reproducible and generally applicable. When exposed to the prisoner's dilemma, the traveling salesman problem or other non-trivial computational tests, GAs proved to be reliable and remarkably fast in finding solid solutions and 'rational' strategies. The results were very convincing and seemed to provide a powerful tool for dynamic, automated problem solving. So although of great theoretical strength, it was the pragmatic value of GAs that paved the way to the prominent status that evolutionary techniques enjoy today. Engineering-focused computer scientists, normally not interested in other fields, suddenly realized the value of theoretical cross-fertilization. GAs bestowed genuine adaptive dynamics upon standard architectures, and self-organizing technology took a significant step forward.
3.4 Neural Networks
Parallel and more architecturally focused developments within computer science were to place cybernetics on the agenda more directly. The rise of neural network theory, or connectionism as it was soon named, in the 1980s was a reemergence of cybernetic ideas. In their seminal paper from 1943, Warren McCulloch and Walter Pitts described the brain as a network of simple neuronal units, each firing according to the net sum of inhibitory and excitatory inputs and the firing threshold of the neuron [24]. Many heavily interconnected neurons facilitate interesting higher-level computation simply by firing or not, due to the non-linearity of their collective behavior. Such digital dynamics resembled bivalent logic, thought to be the essence of reasoning, and the analogy to human thinking was clear. Their ideas soon gave birth to a new architecture for computation called artificial neural networks. However, for various technical and historical reasons, the von Neumann architecture remained dominant.
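The unit McCulloch and Pitts described reduces to a threshold test over weighted inputs. A minimal sketch of such a unit follows; the particular weights and threshold values are illustrative, chosen to show that one unit can realize a logic gate:

```python
def mcp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit: fire (1) iff the net sum of
    excitatory (+) and inhibitory (-) weighted inputs reaches the
    firing threshold."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# With suitable weights and threshold a single unit computes logical AND:
# mcp_neuron([1, 1], [1, 1], 2) fires; mcp_neuron([1, 0], [1, 1], 2) does not.
# A negative weight acts as an inhibitory input.
```

The resemblance to bivalent logic noted in the text is direct: AND, OR and NOT are each realizable by one unit, and the interesting computation comes from wiring many such units together.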
In the early eighties the connectionist approach took a leap forward aided by im-
proved hardware and increased scientific attention. The results were promising
enough to attract attention from mainstream computer science and AI. The primary
attraction of neural networks laid in their capacity to model learning and adaptation,
which GOFAI was not able to provide. In addition, with the renewed interest in biol-
ogy the model gained favor by its greater biological correspondence. So although
neural networks were highly idealized and only partly flexible, as they must be
trained anew for every new task, they still had a biological flavor long missing.
Neural nets remain burdened by architectural hurdles today as such networks take
vast amounts of interconnections to be of practical interest. So far neural networks are
mostly simulated on conventional platforms. Yet, the jury is still out as to whether
neural nets hold the key to future dynamic IT. Given the overall qualities of neural
nets demonstrated thus far, it is worth developing methods for creating large-scale
neural nets with complex architectures. This has the potential of providing a serious
alternative to conventional computers. Again software-hardware integration seems to
take center stage, because neural net architectures may provide the means to revolu-
tionize this aspect of computing.
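The learning capacity mentioned above, and the caveat that a network must be trained anew for every new task, can be illustrated with a bare perceptron rule. This is a toy sketch, not a model from the paper: the same architecture simply learns whatever mapping its training data encodes.

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Single threshold unit trained by the classic perceptron error rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                  # supervised error signal
            w[0] += lr * err * x1               # nudge weights toward target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# training on OR; retargeting the same net to AND means retraining from scratch
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Nothing in the trained weights generalizes to a different task; they encode only the mapping they were shown, which is the "only partly flexible" point made above.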
3.5 New AI and Evolutionary Robotics
In response to the very poor results of GOFAI, especially when compared with the self-confidence displayed at the outset, roboticists started suggesting new ways of conceiving intelligence in the late 1980s [25]. In the place of symbolic computation, low-
level motor capacities were put forward as the basis of cognition. AI and robotics became heavily biologically inspired and turned their interest from human-level reasoning and language to simple animals and their embodied negotiation of the environment. Intelligence was no longer taken as an isolated capacity of a discrete system but as a descriptive term for the interactions between an autonomous system and its
environment. Biological notions such as ‘development’, ‘emergence’ and ‘functional coupling’ came into favor in New AI and robotics. Thus roboticists started implementing ideas from evolutionary computing to develop control mechanisms, and even
morphology and physiology, for both virtual and physical agents. From being marginal ideas, notions of decentralization, bottom-up organization and, not least, embodiment had by the mid-nineties become dominant concepts in robotics, New AI and cognitive science, and buzzwords within most other academic disciplines involving cognition.
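The evolutionary-robotics recipe sketched above can be compressed into a bare-bones loop: controller parameters are mutated and selected on behavioral fitness instead of being programmed by hand. The 'task' here (a single parameter that should approach 0.8) is an invented stand-in for a real sensorimotor benchmark, and all parameters are illustrative.

```python
import random

def fitness(genome, target=0.8):
    """Behavioral score: the closer the controller parameter, the fitter."""
    return -abs(genome - target)

def evolve(pop_size=20, generations=60, seed=2):
    random.seed(seed)
    population = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]       # truncation selection
        offspring = [p + random.gauss(0.0, 0.05) for p in parents]  # mutate
        population = parents + offspring            # elitist replacement
    return max(population, key=fitness)

best = evolve()
```

The same loop scales from one parameter to whole controller weight vectors, and in evolutionary robotics even to morphology, which is the point made in the text.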
3.6 Artificial Life
Simultaneously with the (re-)emergent focus on embodiment and interactive processes in New AI, cognitive science and robotics, another adjacent field was forming.
Building on the theoretical foundations of molecular biology and computer science
researchers started studying (some hoped to create) life in silico, or Artificial Life (ALife), as Christopher Langton baptized the field in 1987 [26]. The marriage of molecular biology and computer science was straightforward due to an underlying functionalism à la GOFAI, regarding life as consisting in computational processes on
information stored in digital DNA. Thus whether the substrate of the life investigated
was carbon or silicon was a somewhat irrelevant empirical matter, at least for so-called ‘strong ALife’. By not arbitrarily confining focus to the carbon-based systems
we happen to know, ALife could contribute substantially to a general study of life -
“life as it could be” [26].
Even if the metaphysics of this self-proclaimed pioneer field represents the zenith of a reductionistic computationalism (as represented in physics by Stephen Wolfram's radical algorithmic theory of the universe), a lot of valuable work relating to evolutionary capacities of software has been done. With its refusal to limit its scope to the things we are familiar with, ALife provides inspiration for biomimetics that is sometimes on the verge of science fiction.
3.7 Swarm Intelligence
In close relation to the work within New AI, robotics and ALife, emergentist models
grew from ethology and biology as well. By studying the heavily collaborative proc-
esses of social insects such as bees, wasps, ants, and termites, valuable knowledge
about the rise of productive global functions of swarms of individuals was gained.
Similarly to neuronal networks, swarms of insects carrying out relatively simple tasks
proved capable of rather complex feats. By exploiting strikingly simple organizational
methods, social insects were shown to behave as a unified intelligent super-organism.
For example, ants are capable of foraging with mathematically optimal distribution and of finding the shortest paths to food sources, while termites practice advanced agriculture and build architecturally impressive nests [13].
Social insects widely use indirect communication in their grand collective labor.
Stigmergy (from the Greek ‘stigma’ = sting and ‘ergon’ = work) is a good example of
indirect communication by (re-)configuration of the environment, which evokes a specific subsequent behavior in an animal. Stigmergy refers to a triggering effect whereby e.g. a hole in a wall prompts an ant to put in the missing pellet of dirt. In this way the organization of building is distributed structurally into the environment and arises ad hoc through self-organization.
An example of stigmergy is the chemical organization by pheromone trails. By leav-
ing trails of evaporating pheromones ants have a dynamic communication system
allowing for efficient organization. The principle is very simple yet just as reliable, and consists in pure ‘mechanics’. The trail used by the first ant returning from foraging is
likely to have the most powerful scent because of the overlaying of the outgoing and
returning trails. Through the chemo-tactic navigation of other ants following the trail,
it becomes incrementally enhanced. Soon all alternatives - the longer routes - are excluded, leaving only one short ‘highway’. Such a reliable, flexible and cost-saving way of communicating is very instructive for the design of embedded IT systems.
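The pheromone mechanism just described can be sketched as a deterministic, mean-field toy model (loosely after [13]): each route's pheromone evaporates a little every step and is reinforced in proportion to its current share of the traffic, with shorter routes re-marked more strongly. All parameters are illustrative.

```python
def simulate(route_lengths, steps=200, evaporation=0.05):
    """Pheromone levels on competing routes between nest and food source."""
    pheromone = [1.0] * len(route_lengths)
    for _ in range(steps):
        total = sum(pheromone)
        pheromone = [
            # evaporation vs. deposit: the share of ants on this route times
            # the deposit rate 1/length (shorter route => stronger marking)
            p * (1 - evaporation) + (p / total) * (1.0 / length)
            for p, length in zip(pheromone, route_lengths)
        ]
    return pheromone

# one short route and two longer alternatives
trails = simulate([1.0, 2.0, 4.0])
```

After a couple of hundred steps the shortest route carries almost all the pheromone; the evaporation term is what keeps the system adaptive, letting the colony re-converge if route lengths change.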
Research in swarm phenomena (e.g. as presented by [13], which is specifically fo-
cused on implementation) provides interesting new ways of organizing complex tech-
nological systems by letting the order rise bottom-up from the units themselves. What
is particularly interesting about swarm phenomena is the possibility of getting cooperative behavior relatively cheaply and with simple individual constituents. The advantage of swarm organization stems from the fact that the difficulty of designing systems grows exponentially with infrastructural complexity. Deploying a number of simple
devices for the same task remedies this by decreasing complexity immensely. Besides, swarms (like networks) malfunction much more gracefully because of parallelism and distribution, and are generally more robust. Since centralization is becoming decreasingly opportune, let alone possible, reliable ways of facilitating global functions are imperative. Swarming seems a promising method.
Like neural networks and evolutionary computing, swarm intelligence is already widely in use. For example, in network switches the model of pheromone trails has
been mimicked with great success to handle massive information distribution by op-
timizing bandwidth usage and preventing bottlenecks.
3.8 Biologically Inspired IT Systems: Autonomic Computing
Several large-scale initiatives of systematically applied biological inspiration have been launched in the last couple of years by major players in the commercial IT field. The Autonomic Computing project from IBM is a good example of how biologically inspired approaches are starting to dominate broadly in IT design [11].¹⁰
The project addresses issues related to ever-growing IT systems and the urgent need for creating self-maintaining and self-organizing systems. The goal is to create IT
systems that calmly and autonomously take care of maintaining themselves and providing assistance without detailed specifications of all subroutines and solutions. Just like the autonomic systems of higher organisms work in the background, leaving more mental energy for interesting and creative tasks, autonomic computing is an initiative to make the time spent with IT meaningful.
The suggested architecture consists of multiple semi-autonomous devices adapting to changing circumstances and needs by following individual (high-level) objectives provided by the programmers. Thus optimized functionality and infrastructure emerge (evolve and develop) through the interaction between users and the systems, and among the devices themselves. Though the Autonomic Computing project mainly regards infrastructure issues, such evolutionary dynamics are equally important for providing improved assistance at the interface level [2, 3, 4].
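The kind of control loop behind such a self-maintaining device can be sketched as follows: the programmer supplies only a high-level objective (here a target utilization), and the device monitors itself and re-provisions its own capacity to keep meeting it. The class and parameter names are invented for illustration; this is a sketch of the general idea, not IBM's Autonomic Computing architecture.

```python
class AutonomicDevice:
    def __init__(self, target_load=0.7):
        self.target_load = target_load  # the high-level objective
        self.capacity = 10.0            # current self-configuration

    def step(self, demand):
        """One monitor-analyze-act cycle."""
        load = demand / self.capacity        # monitor current utilization
        error = load - self.target_load      # analyze against the objective
        self.capacity *= 1 + 0.5 * error     # act: grow or shrink capacity
        return load

device = AutonomicDevice()
for _ in range(30):                  # steady demand of 14 units per cycle
    load = device.step(14.0)
# capacity settles near 20, so utilization converges on the 0.7 objective
```

No subroutine for 'demand of 14' was ever specified; the behavior emerges from the objective and the loop, which is the point of the architecture described above.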
4 The Viability of Biomimetics
So far the reconstructed genealogy of biomimetics seems a glorious march toward total victory, but let us pause before this happy ending sinks in too deeply. First of all,
the picture appears optimistic because the previous account focused on the genealogy
of contemporary biomimetics and deliberately left out most conflicting nuances. Second, because there is no ‘end’ to history, but only continuous flux, history will undoubtedly move on after a biomimetic heyday. So let us examine the scientific foundation of biomimetics in order to equip it for the productive years to come.
¹⁰ http://www.research.ibm.com/autonomic/
4.1 Biomimetic Considerations: Constraints and Freedom
Generally, biomimetics should remain pragmatic and focused, with the due self-constraint of a methodology. Naturalism within philosophy and psychology, (neo-romantic) environmentalism and the other tendencies alongside which biomimetics has bloomed are by nature ideological. But even if biomimetics does ride on an
ideological wave, it is itself only transiently normative as a method to obtain better
technology. So ideology should not be its fuel. If biomimetics builds on the slippery
foundation of a trend it will most likely vanish together with the trend. Hype, however
nice when one is the object of it, must be strictly avoided.
Another concern is the argumentum ad verecundiam fallacy; appealing to an improper
authority. Biology owes a lot of the current attention to the fact that genetics not only
has become a hot scientific topic but gained widespread cultural interest as ‘the secret
code of life’. The resulting ‘gene chauvinism’ that has dominated most biology the
last fifty years, i.e. the intense focus on DNA as the structural blueprint of all life,
provided a lot of spotlight - but often for the wrong reasons. The notion of DNA being
a blueprint or program for the ontogenesis of the organism, as expressed by daily
stories in the news about ‘scientists who have isolated the gene for X and Y’, has
turned out to be overly simplistic [27]. Development is far more complex and non-
linearly entangled with the actual environment of the organism. Most developmental
biologists are turning towards a system-process approach regarding the functions of
genes where genes are not the “selfish” agent of development but merely one, albeit
important, resource for the self-organizing system [28].
Biomimetics must avoid falling prey to this gene-chauvinistic folk biology. The concern is getting seduced into wedlock with the digital architecture by the mutual resonance of molecular biology and computational theory. Even if basing design ideas on
conventional digital architectures is necessary as a pragmatic beginning, biomimetics
must be careful not to get theoretically tangled up with such linear and/or atomistic
approaches. By sticking to an outdated genetic view and merely applying convenient
but shallow analogies, biomimetics risks getting cut off from alternative paths¹¹ to new IT. Biomimetics should rest on qualified insight into biology if it nurtures ambitions
beyond the metaphorical buzz.
On the other hand, biomimetics should not be blindly committed to biological fidelity [29]. First of all, this is because of the unresolved status of fundamental issues within biology itself. To avoid getting sucked into a black hole of biological debate, biomimetics needs to practice a cautious pragmatism regarding its biological foundations. Secondly, as a design methodology it is committed to taking full and creative advantage of
the freedom from natural constraints. Biological evolution is heavily path dependent, opportunistically tinkering and myopically seeking merely local optima in the fitness landscape. Besides, biological evolution is mostly slow compared to other types of development (e.g. cultural and technological). Hence evolution might be the most efficient general adaptive strategy, but it can be improved in a range of specific cases by intentional guidance. Biomimetics should strategically capitalize on the most powerful aspects of biological processes and human design respectively.
¹¹ Cultural evolution is path dependent - as is its biological counterpart - but by contributing to new conceptual ‘scaffoldings’ we can influence cultural change and enhance the creativity of future IT design.
4.2 Dynamic IT
I hope to have made plausible the general popularity of biology as a consequence of a
general scientific movement towards theories of systems, complexity and processes.
This development not only describes the historical genesis of biomimetics but - so I
will claim - its raison d’être. Biomimetics, like its predecessors, is embarking on the challenge of creating dynamic IT. This challenge brings about a lot of
changes some of which go to the bottom of our conventional understanding of design.
Specifically, we will need to address a lot of global and long-term issues inherent in dynamic complex systems. And in spite of the temptation to generalize from specific success instances, biomimetics must keep in mind that ‘solutions’ in nature only
seem finished in a limited perspective. Only the meta-solution of adaptive dynamics is
universal. Even though copying structural and material configurations will become
increasingly important, it will be for their dynamic capacities and not because of
solidity, flexibility or other physical characteristics. The intrinsic quality and power of
living processes lies in their dynamic capacities. Adaptive processes are continuous
‘negotiations’ and cannot be conceptualized as solutions. Copying a specific design and implementing it in a different setting risks missing the point unless the pragmatic value is clear. The cardinal virtue of biomimetic design should be translating
the self-organizing capacities of natural evolutionary dynamics into design to facili-
tate ongoing adaptation and self-maintenance in IT devices [2, 3, 4, 29].
4.2.1 Dynamic Remedies and Dynamic Maladies
In general, biomimetics will address new types of questions arising with pervasive
dynamic systems. The characteristics that give such systems tremendously powerful
and interesting functionalities also bring along new types of problems: Dynamic sys-
tems are vulnerable to dynamic failures. To reverse a famous quote from Martin Hei-
degger’s writing on technology: ‘But where the saving power is, grows danger also’.
So in the euphoria of creating new types of technology, biomimetic designers must
not forget to consider the long term and large scale consequences of such dynamic
architectures [29, 30].
In general, resilience, oscillation and propagation phenomena will be important issues for the design of dynamic systems: on the positive side, for creating mutually supportive and robust systems; on the negative, for avoiding destructive oscillatory or cascading
effects. From cybernetics we have learned the importance of dampening feedback
functions to avoid chaotic dynamics, and there will be a range of other short- and
long-term dynamic phenomena to consider. Dynamic systems are intrinsically path dependent and historical, and accordingly biomimetic design will have a strong temporal dimension that is new to most conventional IT design.
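The cybernetic lesson about dampening feedback can be shown in a toy regulator: the same corrective loop converges when its gain is dampened and oscillates ever more destructively when it is not. The gain values are illustrative.

```python
def regulate(gain, steps=40, setpoint=1.0):
    """Drive a value toward a setpoint by repeated feedback correction."""
    value, history = 0.0, []
    for _ in range(steps):
        error = setpoint - value
        value += gain * error       # feedback correction, scaled by the gain
        history.append(value)
    return history

damped = regulate(gain=0.5)     # |1 - gain| < 1: settles on the setpoint
undamped = regulate(gain=2.1)   # |1 - gain| > 1: overshoots ever harder
```

The undamped run overshoots the setpoint on every step and its swings grow without bound; the dampened run converges smoothly, which is exactly the short-term dynamic phenomenon designers of such systems must anticipate.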
In relation to a general study of resilience and robustness in IT, a way of designing ‘immune systems’ that dynamically fight malicious code will be central. Writing in the aftermath of another massive blackout in the US, a focus on the epidemic effects of large-scale and massively interconnected IT systems seems imperative. If we succeed in
creating immune systems for IT new issues will emerge. Such immune systems might
globally malfunction and give us computer AIDS or even autoimmune defects.
5 Closing Remarks
Living nature is in vogue these years, and naturally state-of-the-art technology design is influenced by the trend. However, trends come and go as fleeting perspectives on the world, and they do not provide suitable foundations for scientific theories.
For biomimetics to stand its best chance of contributing significantly to future IT
design it must have a clear understanding of its premises and goals. This paper has
tried to prepare the ground and provide some of the stones for a better foundation.
I have pointed to the general scientific shift during the 20th century - first manifested by cybernetics and later disseminating to more fields - towards interest in complexity, organization and processes as the main reason for the massive interest in living systems in science, technology and design.
Many factors contributed to this development, the relevant ones of which this paper has identified. Some of the circumstances that led to the rise of biomimetics, such as gene chauvinism and environmentalism, ought not to form the basis for a future biomimetic design of IT if the approach is to be more than a historical curiosity. However, the scientific reasons for the development towards interest in the organization and dynamics of complex systems do offer valuable guidance for the design challenges ahead. Thus factors stemming from these two different sources should be identified and kept separate in order to avoid a lot of futile lip service.
I have argued that, in analogy with its genealogy, the proper focus for biomimetic IT
design is matters of dynamics in complex systems. Mimicking finished designs of
nature might indeed be productive for some tasks, but it should not be the focus for
biomimetics. The challenge of designing highly dynamic IT calls for models of adaptive self-organizing systems capable of managing on the fly rather than fixed solutions, however ingenious. A dynamic approach not only remedies our limited capacities for predicting future needs and behaviors in complex systems, but is the most adequate response to an inherently fluctuating reality.
Biomimetics is not likely to become - or, even if it does, to remain - the dominant approach to IT design. It is, after all, part of a trend, and trends inherently change. However, a general dynamic approach to design is likely to dominate more permanently as we learn to master self-assembling, self-organizing, and reconfigurable structures. Biomimetics might fade with scientific progress and the likely unveiling of more universal characteristics ‘behind’ living processes, leaving biology an arbitrary realm of
reality to model. Until then our insights into the self-organizing processes of nature
nonetheless offer invaluable heuristics for designing dynamic IT.
References
1. Benyus, J. M.: Biomimicry. Innovation Inspired by Nature. HarperCollins. New York
2002
2. Sørensen, M. H.: Assistive Ecologies. Biomimetic Design of Ambient Intelligence. Con-
ference paper Intelligent Agent Technologies 2003
3. Sørensen, M. H.: It’s A Jungle Out There. Toward Design Heuristics for Ambient Intelli-
gence. Conference paper Computer, Communication and Control Technologies 2003
4. Sørensen, M. H.: Design Symbiosis. Interactive Design Principles for Dynamic Technol-
ogy. Forthcoming
5. Kirsh, D.: Changing the rules: architecture and the new millennium. In Convergence. 2001
6. Gatherer, D.: The Memetics of Design. In Bentley, P. J. Ed.: Evolutionary Design by
Computers. 1-79. Morgan Kaufmann, San Francisco 1999
7. Bickhard, M. H. & Campbell, D. T.: Variations in Variation and Selection: The Ubiquity
of the Variation-and-Selective-Retention Ratchet in Emergent Organizational Complex-
ity. Foundations of Science. In press.
8. Edelman, G. M.: Bright Air, Brilliant Fire. In the Matter of the Mind. Basic Books. New
York 1992
9. Bentley, P. J.: An Introduction to Evolutionary Design by Computers. In Bentley, P. J.
Ed.: Evolutionary Design by Computers. 1-79. Morgan Kaufmann San Francisco 1999
10. Dawkins, R.: The Extended Phenotype. Oxford University Press, Oxford 1999
11. Kephart, J. O. & Chess, D. M.: The Vision of Autonomic Computing. IEEE Computer 2003
12. Rheingold, H.: Smart mobs, the next social revolution. Perseus Publishing, Cambridge
2002
13. Bonabeau, E. et al.: Swarm Intelligence. From Natural to Artificial Systems. Oxford Uni-
versity Press. New York 1999
14. Heims, S. J.: Constructing a Social Science for Postwar America. The Cybernetics Group
1946-1953. MIT Press, Cambridge 1993
15. Dupuy, J.P.: The Mechanization of the Mind. On the Origins of Cognitive Science.
Translated by M. B. DeBevoise. Princeton University Press, Princeton 2000
16. Bickhard, M. H. Interactivism: A Manifesto. http://www.lehigh.edu/~mhb0/pubspage.html
17. Bickhard, M. H. & Terveen, L.: Foundational Issues in Artificial Intelligence and Cogni-
tive Science - Impasse and Solution. Amsterdam: Elsevier Scientific. 1995
18. Christensen, W. D. & Hooker, C. A.: An Interactivist-Constructivist Approach to Intelli-
gence: Self-Directed Anticipative Learning. Philosophical Psychology, 13(1), 5-45. 2000
19. Lettvin, J. Y. et al.: What the Frog's Eye Tells the Frog's Brain. Proc. IRE 47 (1959) 1940-1951, reprinted in Warren S. McCulloch, Embodiments of Mind, MIT Press 1965
20. Maturana, H. R. & Varela, F. J.: Autopoiesis and Cognition: The Realization of the Liv-
ing. D. Reidel, Dordrecht 1980
21. Freeman, W. J.: W. Grey Walter. In Lynn Nadel (ed): Encyclopedia of Cognitive Science.
Nature Publishing Group, 2001
22. Mitchell, M.: An Introduction to Genetic Algorithms. MIT Press, Cambridge 2001
23. Holland, J. H.: Adaptation in Natural and Artificial Systems. MIT Press, Cambridge 2001
24. McCulloch, W. S. and Pitts, W. H.: A logical calculus of the ideas immanent in nervous
activity. Bulletin of Mathematical Biophysics, 5: 115-133. 1943
25. Brooks, R. A.: Intelligence Without Representation. Artificial Intelligence Journal (47)
1991
26. Langton, C. G.: Artificial life. In Boden, M.A.: The Philosophy of Artificial Life. Oxford
University Press 1996
27. Keller, E. F.: The Century of the Gene. Harvard University Press, Boston 2002
28. Oyama, S. et al.: Cycles of Contingencies. Developmental Systems and Evolution. MIT
Press, Cambridge 2001
29. Sørensen, M. H.: The Viability of Biomimetics. Biology vs. Technology. Forthcoming
30. Barabási, A.L.: Linked. The New Science of Networks. Perseus Publishers, Cambridge
2002