Cellular Automata, Modeling, and Computation
Anouk Barberousse, IHPST (CNRS – Université Paris 1 – Ecole normale supérieure), Paris
Sara Franceschelli, Université de Lyon, ENSLSH and REHSEIS, Paris
Cyrille Imbert, IHPST and Université de Caen
DRAFT – PLEASE DO NOT QUOTE WITHOUT PERMISSION OF THE AUTHORS
(about 9800 words)
1. Introduction
Cellular Automata (CA) based simulations are widely used in a great variety of domains, from
statistical physics to social science. They allow for spectacular displays and numerical predictions. Are they, for
all that, a revolutionary modeling tool, allowing for “direct simulation” (Morgan and Morrison 1999, 29), or for
the simulation of “the phenomenon itself” (Fox Keller 2003)? Or are they merely models of “a
phenomenological nature rather than of a fundamental one” (Rohrlich 1990, section 10)? How do they compare
to other modeling techniques?
In order to answer these questions, we present a systematic exploration of CA’s various uses. We first
describe their most general characteristics (section 2); we then present several examples of what is done with CA
nowadays (section 3); we draw some philosophical implications from these examples (section 4) and finally
offer some lessons about CA, modeling and computation.
2. What are cellular automata?
CA are employed in a variety of modeling contexts. As with other mathematical modeling tools, their
modeling capacities are grounded in their mathematical properties; however, discussions about CA tend not to
distinguish clearly between their purely mathematical properties and their representational and computational
potential when they are used in modeling situations. In order to understand what CA contribute to the practice of
modeling and simulation, they first have to be described in their own right.
In this section, we describe the general properties of CA: first the mathematical, logical and
computational ones, then the striking visual displays they allow for. We finally discuss the questions raised by
the implementation of these formal structures.
2.1 The mathematics of CA
A cellular automaton consists of a dynamical rule which synchronously updates a discrete variable
defined on the sites (cells) of a d-dimensional lattice. The values s_i of the observable are taken in a finite set.
The same (local) transition rule applies to all cells uniformly and simultaneously. CA rules can be expressed in
the language used to study dynamical systems (Badii and Politi 1997, 49). An assignment of states to all cells is
a configuration. A CA is invertible (or reversible) if every configuration has exactly one predecessor.
Different types of neighborhood can be defined. In a von Neumann neighborhood, the cells to the north,
south, east and west of the centre cell are defined as its neighbors; together with the centre cell itself, the
neighborhood thus comprises 5 cells. A Moore neighborhood also includes the diagonal cells to the northeast,
northwest, southeast and southwest.
From the mathematical point of view, CA are an important class of objects that can be, and are,
explored for themselves as well as applied to the study of natural phenomena. They behave differently from
smooth dynamical systems, since the existence of discontinuities in arbitrarily small intervals prevents any
linearization of the dynamics and, in turn, a simple investigation of the stability properties of the trajectories
(Badii and Politi 1997, 49-50). Nonetheless, some of the tools used to describe and classify discrete-time maps
can be extended to CA, in particular the systematic study of recurrent configurations, which characterize the
asymptotic time evolution.
Non-invertible CA are of special interest for the study of complex systems. When acting on a seed
consisting of a uniform background of equal symbols except for a few cells (possibly a single one), some
non-invertible CA are able to yield interesting limit patterns, neither periodic nor completely disordered.
Attempts have been made to classify the variety of CA as discrete dynamical systems. Wolfram has
proposed to distinguish four classes according to CA’s behaviors; however, this “phenomenological”
classification suffers from serious drawbacks. In particular, Culik and Yu (1988) have shown that it cannot be
decided whether all finite configurations of a given CA become quiescent, and therefore to which class it
belongs. Other classifications have been proposed. For example, Langton (1990) suggests that CA rules can be
parameterized by his λ parameter, which measures the fraction of non-quiescent rule-table entries. Dubacq et
al. (2001) as well as Goldenfeld and Israeli (2006) have also proposed a parameterization of CA rules as
measured by their Kolmogorov complexity.
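As an illustration, Langton’s λ for a one-dimensional elementary CA rule can be computed directly from its rule table. The following is a minimal sketch of ours (the choice of 0 as the quiescent state follows the usual convention):

```python
# Sketch: evolving a 1D elementary CA and computing Langton's lambda
# parameter for its rule table. State 0 is taken as the quiescent state.
def rule_table(rule_number):
    """Map each 3-cell neighborhood (encoded as a number 0..7) to a next state."""
    return {n: (rule_number >> n) & 1 for n in range(8)}

def step(cells, table):
    """One synchronous update with periodic boundary conditions."""
    L = len(cells)
    return [table[(cells[(i - 1) % L] << 2) | (cells[i] << 1) | cells[(i + 1) % L]]
            for i in range(L)]

def langton_lambda(table):
    """Fraction of rule-table entries mapping to a non-quiescent state."""
    return sum(1 for v in table.values() if v != 0) / len(table)

table110 = rule_table(110)
row = [0] * 31
row[15] = 1                      # single seed on a quiescent background
for _ in range(5):
    row = step(row, table110)
print(langton_lambda(table110))  # 5 of the 8 entries are non-quiescent: 0.625
```

This already shows why λ is only a coarse indicator: rules with the same λ can behave very differently.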
Traditional tools of statistical physics have also been used to study CA. For example, Chapman-Enskog
expansions have been used to derive CA macroscopic laws or dynamics (cf. section 3.5). Renormalization
techniques have also been used to characterize their macrodynamics; the resulting map is in some sense an
analogue of the familiar renormalization group flow diagrams from statistical mechanics, displaying stable fixed
points (Goldenfeld and Israeli 2006). Symbolic dynamics, as well as the study of Lyapunov exponents, entropy
measures and fractal dimensions, are also used (for a review of all these techniques, see Badii and Politi 1997,
54-55; Ilachinsky 2001, 167-225).
The purely mathematical study of CA by various means is thus a growing field, still full of open
questions, but whose results are already impressive. Here is Toffoli’s and Margolus’ bold appraisal of the
importance of this field: CA are “abstract dynamical systems that play a role in discrete mathematics comparable
to that played by partial differential equations in the mathematics of the continuum” (Toffoli and Margolus 1990,
229). We suspend our judgment about the validity of this claim for mathematics; nevertheless, we explore its
implications for modeling in section 4.
As is well known, CA are interesting objects not only from a mathematical point of view but also from
a computer-science one. A CA can indeed be seen as “an indefinitely extended network of trivially small,
identical, uniformly connected, and synchronously clocked digital computers” (Toffoli and Margolus 1990, 229).
The notion of computation involved in this sentence is a broad one. At least two notions of computation are
currently used in papers about CA. According to the first, “to compute” means “to be a universal computer
in the standard Turing machine sense.” A computer in this sense can simulate the architecture of any other
sequential machine; this notion of simulation is a purely computer-science one. The second, less rigorous
notion of computation involves a broader notion of simulation: “to compute” means “for any given mathematical
situation, to find the minimum cellular space that can do a simulation of it”. We insist that these two notions,
albeit widely used, should not be conflated, and that only the first can be rigorously defined.
Generally speaking, CA can be viewed as models of distributed dynamical systems (Toffoli 1984).
They thus appear as a paradigm of distributed computation. In addition, it can be noted that invertible CA can
perform reversible, that is, information-preserving, computing processes.
2.2 A striking format
When looking at Conway’s Game of Life (see section 3.2), at a simulation of car traffic inspired by
Nagel’s and Schreckenberg’s models, or more simply at Wolfram’s bestseller A New Kind of Science (2002),
what is most striking is certainly not the mathematical properties of CA but rather the visual displays they
permit. From the mathematical point of view, the visualization capacities of CA are inessential; for the human
visual apparatus, however, they are of great importance. When a CA-based simulation is displayed on a computer
screen, the human eye readily interprets what is going on, provided some clues are given to the viewer about
what is represented and how. For example, CA-based 2D simulations of forest fires are seemingly easy to
interpret: since we are already familiar with this type of simulation, it is hardly necessary to look at the caption to
understand how the represented fire propagates. The specific format of CA-based 2D simulations (a 2D grid
whose cells can take at least two possible colors) allows for easy-to-grasp representation conventions.
From the “in principle” point of view criticized in Humphreys (2004), visual displays are not a
respectable enough object of reflection for philosophy of science. However, Humphreys insists on the growing
role visualization techniques play in today’s science. These techniques, which are massively used in all types of
computer simulations, could not be dispensed with in the study of complex phenomena. Consequently, from the
“in practice” point of view, the visualization capacities of CA are amenable to the same analysis Humphreys
(2004, 111-114) proposes for classical computer simulations.
2.3 Questions of implementation
In the same way as the mathematical and computational properties of CA have to be carefully
distinguished from their visualization capacities, these properties should also be kept apart from questions about
the implementation of these logical machines. CA can be implemented on sequential digital machines as well
as on parallel machines. In some cases the architecture of the machine can be the same as the architecture of the
CA itself: the nearest-neighbor-connected cellular space can be imaged directly in hardware. In this case the
computation of a CA’s evolution is obviously much faster.
As Margolus (1999) emphasizes, there is a serious mismatch between hardware and algorithms when
CA-based simulations are run on conventional machines. When models are designed to fit the computational
potential of CA but are implemented on such machines, the advantage of the uniformity and locality of CA
systems is lost. Toffoli and Margolus designed and actually built several generations of CA machines (see the
narrative of these attempts in Margolus 1999, and the technical details in Margolus 1996); however, “crystalline”
hardware is still in its infancy, and construction costs remain high.
3. Brief case studies
In this section, we briefly present some typical examples of what can be done with CA which, taken
together, are representative of today’s research on CA; we do not claim, however, that they cover every aspect
of it. Our main goal is to show that the philosophical questions CA raise depend on the scientific research
programmes they are involved in and on what is expected from them within these programmes. The
philosophical discourse about CA should reflect this variety, as we show in section 4.
3.1 CA as universal computers
Our first example of the various ways CA are used nowadays has already been briefly described in
section 2.1: invertible CA can be used as parallel computers. They can perform elementary logical operations in
a variety of ways and are on a par with Turing and RAM machines as abstract universal models of computation.
According to Toffoli and Margolus (1990), the introduction of invertible CA is even comparable in significance
to the introduction of Turing machines (and similar paradigms of effective computation) in the late 1930s. If
Toffoli and Margolus are right, the importance of CA for the entire domain of theoretical computer science
cannot be overestimated: they could allow for new, unexpected insights into the nature of computation.
Let us continue Toffoli’s and Margolus’ comparison between Turing machines and invertible CA.
Turing machines are the product of the attempt “to capture, in axiomatic form, those aspects of physical reality
that are most relevant to computation”. By comparison, CA are, according to Toffoli and Margolus, “more
expressive” than Turing machines, “in so far as they provide explicit means for modeling parallel computation
on spatiotemporal background”. Toffoli and Margolus have further explored the computing potential of CA. For
instance, in their (1990), they explore the following question: “How does one tell if a CA is invertible?”
Theorems have been proved about the existence of effective procedures for deciding whether certain types of
CA are invertible or not. They also examine how one can make a CA rule invertible. All these investigations
shed light on a way of being a universal computer that is completely different from being a Turing machine.
By way of conclusion about this first example, let us attempt to clarify (and criticize) a perhaps
surprising usage of Margolus and Toffoli, who claim that invertible CA can “model” parallel computation.
Other possible choices to express the same idea are “simulate” or “emulate”. Toffoli’s and Margolus’ vocabulary
indicates that their notion of modeling has a very wide extension. We insist that this sense of “modeling” is
completely different from the usual sense, as in the expression “modeling car traffic”, for instance. When CA are
said to “model” parallel computation, no representation of any phenomenon is involved: there is no CA that is
“about” parallel computation. We prefer to say that CA simply perform parallel computation. In the remainder of
this paper, we will be careful to reserve the term “model” for representations of phenomena, since we believe
that this helps keeping distinct problems clearly separated. Toffoli’s and Margolus’ way of speaking reveals that
CA are involved in a tricky set of questions about representation, computation, and mathematical formalism. We
aim to clarify this nub of problems by presenting other examples in which CA play other roles.
3.2 “Life”
Conway’s “Game of Life” is perhaps the most widely known CA rule (Berlekamp et al. 1982). It
involves a 2D square lattice with one bit at each lattice site, and a non-invertible rule for updating each site that
depends on the total number of 1s present in its eight nearest neighboring sites. If this total is 3, a given site
becomes a 1; if the total is 2, the site remains unchanged; in all other cases the site becomes a 0.
When the initial state is a random distribution of 0s and 1s and the Life dynamics is run at video rates,
we can observe a lively pattern of activity, with small-period oscillating structures, various stable configurations
and periodic patterns, gliders (if the initial configuration is well chosen), etc. Some populations of cells can grow
indefinitely; some configurations move through the lattice and leave “wreckage” behind.
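The rule is easy to implement. The following minimal sketch (with periodic boundary conditions, an implementation choice of ours) reproduces a classic period-2 oscillator, the “blinker”:

```python
# A minimal sketch of Conway's Life rule as described above: a site becomes
# a 1 with exactly 3 live neighbors, keeps its state with 2, and becomes 0
# otherwise. Periodic boundary conditions are assumed.
def life_step(grid):
    """One synchronous update on a 2D list of 0/1 values."""
    R, C = len(grid), len(grid[0])
    new = [[0] * C for _ in range(R)]
    for r in range(R):
        for c in range(C):
            n = sum(grid[(r + dr) % R][(c + dc) % C]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            new[r][c] = 1 if n == 3 else (grid[r][c] if n == 2 else 0)
    return new

# A "blinker": a row of three live cells oscillating with period 2
g = [[0] * 5 for _ in range(5)]
g[2][1] = g[2][2] = g[2][3] = 1
g2 = life_step(life_step(g))
print(g2 == g)  # True: the blinker returns to itself after two steps
```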
What is most striking about Life is that, despite the simplicity of its rule, it can exhibit a wealth of
complex phenomena. As one of the most successful early achievements of Artificial Life, Life is an example
of a use of CA governed by the following question: what are the most basic mechanisms generating evolving
complex phenomena, comparable to those exhibited by the evolution of life on Earth? This investigation is
carried on by exploring the universes generated by the Life rule. The same type of investigation is also carried
on in the fields of genetic algorithms (Holland 1975) and multi-agent systems. It can be compared with the
attempts that led last century’s mathematicians to look for the basic operations needed to compute all the
functions that were then computed by hand; this gave birth to what is now known as Turing machines, the
lambda calculus and recursive function theory.
3.3 The Billiard Ball Model Cellular Automaton
The Billiard Ball Model Cellular Automaton (BBMCA) is a purely digital model of computation closely
related to a continuous model of computation, Fredkin’s Billiard Ball Model or BBM (Fredkin and Toffoli
1982). The BBM is a continuous dynamical system whose initial conditions are suitably restricted and whose
evolution is observed only at regularly spaced time intervals. Due to these restrictions, it can compute logical
operations. The BBMCA is a CA whose behavior is the same as that of the original BBM: it too is a universal
computer.
The BBM is a 2D gas of identical hard spheres to which the following convention is attached: if the
center of a sphere is at a given point in space at a given time, we say there is a 1 there; otherwise there is a 0.
The 1s can move from place to place, and the number of 1s is constant. Here is the link between hard spheres
and logic: every place where a collision might occur is viewed as a Boolean logical operation. This implies that
the angle and timing of the collisions and the relative speeds of the balls have to be precisely controlled.
Moreover, mirrors are used to route and delay balls (that is, signals) as required to perform logical computation.
When a BBM is observed only at integer time steps, it appears as a square lattice of points, each of
which may be black or white (or assigned a 0 or a 1), evolving according to a local rule. The construction of a
CA having the same behavior as the BBM thus looks like an easy task. As Margolus (1984) emphasizes, it is not
so easy, since a direct translation would require a complicated set-up (with 6 states per cell, for instance).
However, BBMCA do exist and are universal computers in the same way as BBM are. The trick is to define two
ways of grouping cells into blocks and to apply the rule alternately to the even and odd 2×2 blockings (Margolus
1984, 88-89; 1999, 16). To put it another way, the balls in the CA are represented by spatially separated pairs of
particles, one following the other: the leading edge of the ball followed by the trailing edge. (A particle in a
BBMCA is just a block of 4 cells, one of which is assigned a 1.) Figure 1 represents a BBMCA collision (see
legend for details).
Figure 1. Successive snapshots of a small area where a collision is happening. In the first image, the solid blocks
are about to be updated. The blocking alternates in successive images. (Margolus 1999, figure 1.9)
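The alternating-blocking machinery can be sketched as follows. The block rule below is a deliberately simple placeholder of ours, a clockwise rotation of each 2×2 block, not the actual BBMCA collision rule; it only illustrates how the even and odd blockings alternate, and why any such block-permutation rule is invertible and conserves the number of 1s:

```python
# Sketch of the alternating 2x2 blocking ("Margolus neighborhood") scheme.
# The block rule here is a placeholder (clockwise rotation), not the BBMCA
# collision rule; any block permutation is trivially invertible and
# conserves the number of 1s.
def margolus_step(grid, phase):
    """Rotate every 2x2 block clockwise; phase 0/1 selects the even or odd
    blocking. Grid dimensions must be even; periodic boundaries assumed."""
    R, C = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(phase, R + phase, 2):
        for c in range(phase, C + phase, 2):
            r0, r1 = r % R, (r + 1) % R
            c0, c1 = c % C, (c + 1) % C
            a, b = grid[r0][c0], grid[r0][c1]
            d, e = grid[r1][c0], grid[r1][c1]
            # clockwise rotation of the block [[a, b], [d, e]]
            new[r0][c0], new[r0][c1] = d, a
            new[r1][c0], new[r1][c1] = e, b
    return new

g = [[0] * 4 for _ in range(4)]
g[0][0] = g[2][2] = 1
h = margolus_step(g, 0)   # even blocking
h = margolus_step(h, 1)   # odd blocking
print(sum(map(sum, h)) == sum(map(sum, g)))  # True: particle count conserved
```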
The BBMCA is interesting neither as a computer per se nor as a classical mechanical model of collisions
between hard spheres. However, as Margolus (1999) emphasizes, it is “a wonderful bridge between the tools and
concepts of continuum mechanics, and the world of exact discrete information dynamics”. It can be used in
investigations of the connections between physics and computation. Moreover, it illustrates how CA can
contribute to research in fields like the physics of information (Zurek 1984), for example to questions about the
entropy and reversibility of computational processes.
3.4 Exploration of the Ising model
Whereas Life does not follow from any mathematical representation of the evolution of life on Earth,
but is rather designed to investigate its most basic features, other CA have been designed to investigate
mathematical constructs, especially those involving a regular lattice structure and local dynamical rules, among
them the Ising model, as well as models of percolation, nucleation, condensation, etc. (Vichniac 1984, 101). The
Ising model surely has a physical origin; however, the purely mathematical investigation of its properties has
developed on its own, and CA allow for interesting (albeit negative) results on this road.
A strong motivation for a CA-based investigation of the Ising model is to try and save computational
resources by using a computational architecture with local update rules. The Ising model involves a lattice, each
site of which is associated with a spin variable s_i, which can take two values, +1 or −1. The energy H of a given
configuration S of spins is made up of two parts: (1) the contribution due to the interaction with an external field
M, equal to H_{S,M} = −M Σ_i s_i; (2) the contribution arising from the interactions between the spins, equal to
H_{S,S} = −J Σ_{i,j} s_i s_j. One then defines the partition function Z = Σ_S e^{−H(S)/kT}, where k is the
Boltzmann constant and T the system’s temperature.
In the canonical ensemble, each configuration has probability p(S) = e^{−H(S)/kT} Z^{−1}: it depends
on the global value of the energy. In order to compute the average value of an observable, the trick of usual
Monte Carlo simulations is to find transition rules between configurations that enable one to build a list of
configurations correctly sampling the canonical distribution, thereby avoiding the computation of Z. Vichniac
tried to find a CA rule sampling the canonical distribution. It turned out not to be as simple a task as he had
hoped. In spite of a striking resemblance between CA and the Ising model, there is no obvious
(topology-conserving) simulation of the wandering of Ising configurations through the canonical ensemble with
a simultaneous updating of all the spins at each iteration: spurious stable maximal-energy checkerboard patterns
emerge and ruin the simulation.
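Why simultaneous updating misbehaves can be illustrated with a toy zero-temperature rule of our own devising (not Vichniac’s actual rule): each spin aligns with the majority of its four von Neumann neighbors. On a checkerboard every spin is outvoted, so the whole lattice flips to the complementary checkerboard and back, a maximal-energy period-2 pattern, never reaching the ferromagnetic ground state:

```python
# Toy illustration of the checkerboard pathology of synchronous updating.
# Rule (our own stand-in): each +1/-1 spin aligns with the majority of its
# four von Neumann neighbors; ties leave the spin unchanged. Periodic grid.
def sync_majority_step(spins):
    R, C = len(spins), len(spins[0])
    out = [row[:] for row in spins]
    for r in range(R):
        for c in range(C):
            s = (spins[(r - 1) % R][c] + spins[(r + 1) % R][c]
                 + spins[r][(c - 1) % C] + spins[r][(c + 1) % C])
            if s != 0:
                out[r][c] = 1 if s > 0 else -1
    return out

board = [[1 if (r + c) % 2 == 0 else -1 for c in range(4)] for r in range(4)]
flipped = sync_majority_step(board)
print(flipped == [[-s for s in row] for row in board])  # True: full flip
print(sync_majority_step(flipped) == board)             # True: period 2
```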
Other investigations have nevertheless shown that there is a fundamental relationship between
d-dimensional probabilistic CA (PCA) and (d+1)-dimensional Ising spin models. By averaging over all possible
space-time “histories”, properties of the PCA such as correlation functions correspond to thermodynamic
averages in the associated spin model. The link between the geometries of CA lattices and of Ising spin models
is not as straightforward as one could have wished. For example, the time evolution of a one-dimensional
probabilistic CA is equivalent to the equilibrium statistical mechanics of a spin model on a triangular lattice (see
Ilachinsky 2001, 343 sq. for a review, and Domany and Kinzel 1984, Georges and Le Doussal 1989 for the
original articles).
This example shows that the use of CA to investigate extended mathematical models is not
straightforward. At the same time, it illustrates that deep relationships between CA and other extended models,
such as spin systems, neural networks or percolation models, do exist and are being investigated. This confirms
again that CA are not a shining isolated star in the sky of mathematics (see section 2.1).
3.5 Lattice gas models
We have so far described several cases in which the mathematical properties of CA were investigated
and exploited within mathematical investigations. The potential of CA is not limited to computer science and
mathematics, though. CA can also be attributed remarkable representational capacities and are investigated as
such. We will review several cases in which this representational potential is successfully exploited and show
that there is no single law governing the use of CA in modeling projects.
Our first example, lattice gas models, is drawn from a domain, hydrodynamics, where the continuous
Navier-Stokes equations had been unrivalled as a model for more than 100 years. This model has been studied
both analytically and numerically in the second half of the 20th century. From the 1980s on, lattice gas models
have been introduced for the study of fluids. Lattice gas models have been both calibrated (Humphreys 2004,
117) by reference to many already known situations (Hasslacher 1987) and used independently in new situations
(Doolen, ed., 1991; Rothman and Zaleski 1997; Succi 2001; Wolf-Gladrow 2000).
A major motivation for the use of CA in modeling non-equilibrium fluid dynamics is that it avoids
resorting to artificial (although common) idealization procedures. As Hasslacher (1987) puts it, the discretization
of the Navier-Stokes equations is equivalent to introducing an artificial micro-world with a particular
micro-kinetics. The particular form of this micro-world is fixed by mathematical considerations of elegance and
efficiency, applied both to simple arithmetical operations and to the particular architecture of available machines.
By contrast, “discrete fluids”, namely models of fluid dynamics using CA as their basic ingredients, do
not contain, according to their proponents, any “artificial” components. In the simplest CA model in 2D, namely
the hexagonal model, no transformation (discretization) of partial differential equations is necessary.
Conservation of momentum and of particle number are built from the start into already exactly computable
collision rules. Together with an exclusion principle (forbidding two particles to be at the same place at the same
time), these collision rules do the job of reproducing the collective behavior predicted by the compressible and
incompressible Navier-Stokes equations.
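The way conservation laws are built into exactly computable collision rules can be sketched with the square-lattice HPP gas, a simpler relative of the hexagonal model (the hexagonal case works the same way, with six directions; as explained below, the square lattice fails on isotropy). This is a sketch of ours, not the production models cited above:

```python
# Sketch of a square-lattice (HPP-style) lattice gas. State: one bit per
# velocity direction (E, N, W, S) at each site, so the exclusion principle
# is built in. Collision rule: head-on pairs rotate 90 degrees, which
# conserves particle number and momentum exactly, by construction.
import random

DIRS = [(1, 0), (0, -1), (-1, 0), (0, 1)]  # E, N, W, S (x right, y down)

def collide(cell):
    """Head-on pairs (E+W alone, or N+S alone) rotate into the other pair."""
    if cell == [1, 0, 1, 0]:
        return [0, 1, 0, 1]
    if cell == [0, 1, 0, 1]:
        return [1, 0, 1, 0]
    return cell

def step(lattice):
    """One step: collision at every site, then propagation (periodic)."""
    R, C = len(lattice), len(lattice[0])
    post = [[collide(lattice[r][c]) for c in range(C)] for r in range(R)]
    new = [[[0, 0, 0, 0] for _ in range(C)] for _ in range(R)]
    for r in range(R):
        for c in range(C):
            for d, (dx, dy) in enumerate(DIRS):
                if post[r][c][d]:
                    new[(r + dy) % R][(c + dx) % C][d] = 1
    return new

random.seed(0)
lat = [[[random.randint(0, 1) for _ in range(4)] for _ in range(8)] for _ in range(8)]
n0 = sum(sum(cell) for row in lat for cell in row)
for _ in range(10):
    lat = step(lat)
print(sum(sum(cell) for row in lat for cell in row) == n0)  # True: conserved
```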
The geometry of the lattice is especially important. In order to see why, let us look at how the collective
behavior of the particles is derived (a) in continuous models and (b) in Lattice Boltzmann Methods (LBM).
(a) Starting from an atomic picture and Newton’s laws, we apply kinetic theory and obtain a probabilistic
description of the motion of particles according to their positions and speeds (namely, their distribution in phase
space). We then arrive at the Boltzmann transport equation, which describes how the probability distribution of
the particles in phase space evolves. The next step is to use appropriate approximations and a Chapman-Enskog
expansion, and to compute the integrals[1] corresponding to the different conserved quantities. This is how the
Euler and Navier-Stokes equations are recovered, as well as the tensors describing the fluid.
(b) One can check in exactly the same way that the same macroscopic Navier-Stokes-like behavior obtains in
a lattice gas (except that the derivation is made in a discrete phase space). This derivation approach makes
clearer how the macroscopic behavior of the fluid depends on the microscopic behavior of particles. The best
example of that dependence is the derivation of the momentum stress tensor Π_ij, which describes and controls
the convective and viscous terms of the Euler and Navier-Stokes equations. This tensor must be isotropic (since
fluid motions are isotropic). A square lattice does not contain enough symmetries for Π_ij to be isotropic; only a
hexagonal lattice can do that in 2D. By using considerations on tensor structure for polygons and polyhedra in
d-dimensional space, one can also arrive at satisfactory models in any dimension.
This example indicates (i) that CA-based simulation can successfully compete with more traditional
approaches, even though its starting point is utterly different, and (ii) that the evolution of a CA used to represent
the motion of a fluid cannot simply be interpreted following the convention “one black cell, one particle”. Such
a convention would be completely misleading. The representational capacities of CA for modeling fluid
dynamics are huge, but they are also hard to tame, and unpleasant surprises are not uncommon.
Let us conclude this example with one of its most convincing proponents:
“The methods we use to do this are very conservative from the point of view of recent work on cellular
automata, but rather drastic compared to the approaches of standard mathematical physics. Presently,
there is a large gap between these two viewpoints. The simulation of fluid dynamics by cellular
automata shows that there are other complementary and powerful ways to model phenomena that would
normally be the exclusive domain of partial differential equations.” (Hasslacher 1987, 177)
3.6 “Phenomenological” models of complex phenomena
The use of CA in modeling is not always associated with well-confirmed physical theories, or
comparable to alternative models as in the previous example. CA-based simulations can be designed
independently of any underlying theory, from just a few leading assumptions about the target phenomena. Such
models, for instance models of car traffic, of forest fires or of snow transport by wind, seem to have nothing in
common as far as the entities involved and their properties are concerned. However, the use of CA to study these
phenomena reveals that well-known methods of statistical physics can be efficiently applied in those cases. For
example, these models can be studied using mean-field approximations (Schadschneider and Schreckenberg
1993) and dimensional analysis (Nagel and Schreckenberg 1992). We briefly describe CA-based models of car
traffic in what follows.
[1] For example, you can have an equation describing the balance of energy in a region of phase space. By
suitably integrating the two members of the equation over phase space, you get new macroscopic equations.
The first traffic flow model (Nagel and Schreckenberg 1992) is defined on a one-dimensional array of
L sites with open or periodic boundary conditions. Each site is either occupied or empty, and each vehicle is
characterized by a velocity v between 0 and v_max. The update rule is the following:
 acceleration by one velocity unit if the velocity of a vehicle is lower than v_max and if the distance to the next
car ahead is larger than v + 1;
 if a vehicle at site i sees the next vehicle at site i + j (with j ≤ v), it reduces its speed to j − 1;
 randomization: with a certain probability, the velocity of a moving vehicle is reduced by one unit;
 each vehicle is advanced by v sites at each time step.
This is just a short description of a probabilistic CA rule with a neighborhood of v_max + 1 nearest
neighbors. The terminology (“vehicle”, “sees”) is just a convenient way to describe a microscopic dynamics
between particles characterized by a position and a speed on a discrete space, with density as a varying
parameter. These basic properties are sufficient to obtain a phase transition from laminar flow to start-stop
waves, namely, to correctly predict traffic jams in given situations.
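The rule set above can be sketched as follows (the braking probability p_brake is a parameter we choose for illustration):

```python
# Sketch of the Nagel-Schreckenberg rule on a periodic one-lane road.
# Cars are (position, velocity) pairs; p_brake is the randomization
# probability (an illustrative value of ours).
import random

def nasch_step(positions, velocities, L, v_max=5, p_brake=0.3):
    """One parallel update: accelerate, brake to the gap, randomize, move."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_v = velocities[:]
    for k, i in enumerate(order):
        j = order[(k + 1) % n]                       # next car ahead
        gap = (positions[j] - positions[i] - 1) % L  # empty cells ahead
        v = min(velocities[i] + 1, v_max)            # acceleration
        v = min(v, gap)                              # slow down behind next car
        if v > 0 and random.random() < p_brake:      # randomization
            v -= 1
        new_v[i] = v
    new_p = [(positions[i] + new_v[i]) % L for i in range(n)]
    return new_p, new_v

random.seed(1)
L, n = 100, 30
pos = random.sample(range(L), n)
vel = [0] * n
for _ in range(50):
    pos, vel = nasch_step(pos, vel, L)
print(len(set(pos)) == n)  # True: exclusion holds, no two cars share a cell
```

Tracking the mean velocity as the density n/L varies already reproduces the transition from free flow to jams.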
Additions to the original model have been made. For example, a two-lane model has been studied, as
well as the effect of a sudden one-lane bottleneck. Braking or acceleration parameters can be refined using
empirical data in order to try and explain more refined phenomena, such as average traffic in particular cities
(Olmos and Munoz 2004). These models can be studied using the usual techniques of statistical physics, such as
mean-field approximation (Schadschneider and Schreckenberg 1993) and dimensional analysis (Nagel and
Schreckenberg 1992).
Forest fires have been investigated with similar models; the interested reader can look at Bak, Chen
and Tang (1990), Chen, Bak and Jensen (1990), Drossel and Schwabl (1992), Grassberger (2002), and Henley
(1989, 1993).
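A minimal forest-fire CA in the spirit of the Drossel-Schwabl model can be sketched as follows (parameter values are illustrative, and we use the simple variant without separated time scales):

```python
# Sketch of a forest-fire CA: empty sites grow trees with probability p,
# trees ignite spontaneously with probability f, and a burning tree ignites
# its (von Neumann) tree neighbors at the next step before becoming empty.
# p and f are illustrative values, not calibrated parameters.
import random

EMPTY, TREE, FIRE = 0, 1, 2

def forest_step(grid, p=0.05, f=0.001):
    R, C = len(grid), len(grid[0])
    new = [[EMPTY] * C for _ in range(R)]
    for r in range(R):
        for c in range(C):
            s = grid[r][c]
            if s == FIRE:
                new[r][c] = EMPTY            # burnt out
            elif s == TREE:
                burning_neighbor = any(
                    grid[(r + dr) % R][(c + dc) % C] == FIRE
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                new[r][c] = FIRE if burning_neighbor or random.random() < f else TREE
            else:
                new[r][c] = TREE if random.random() < p else EMPTY
    return new

random.seed(2)
g = [[TREE if random.random() < 0.5 else EMPTY for _ in range(20)] for _ in range(20)]
for _ in range(100):
    g = forest_step(g)
```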
What is striking in these and other so-called “phenomenological” models is that, whereas their
predictive power does not depend on any background theory, the phenomena so described exhibit macroscopic
properties that are characteristic of other complex systems, like self-organization. Those properties are difficult
to access within other approaches to such phenomena.
3.7 Schelling model
Schelling’s model (1969, 1971) was proposed to study segregation effects. Its representational
framework is the following: individuals belong to two different classes, black and white or, in more abstract
terms, stars and zeros. Each individual is represented as interacting with its neighbors according to the Moore
neighborhood pattern and is attributed migration options, allowing for the definition of a local rule. An
individual leaves its current neighborhood if the frequency of individuals of its own class there falls below a
required minimum. For instance, individuals might wish to live in a neighborhood where their class is not in a
minority; if that requirement is not met, an individual will move to the nearest alternative site where it is. This
rule evolves in such a way as to produce neat segregation effects (see examples in Hegselmann and Flache
1998), corresponding to observations in certain cities.
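The local rule just described can be sketched as follows (as a simplification of ours, unhappy agents move to a random empty cell rather than to the nearest satisfactory one, and the tolerance threshold is illustrative):

```python
# Sketch of a Schelling-style dynamics. Agents of two classes (1 and 2;
# 0 = empty) count like agents in their Moore neighborhood and, if unhappy,
# move to a random empty cell. Threshold and grid size are illustrative.
import random

def unhappy(grid, r, c, threshold=0.5):
    """True if the fraction of like occupied neighbors is below threshold."""
    R, C = len(grid), len(grid[0])
    me = grid[r][c]
    neighbors = [grid[(r + dr) % R][(c + dc) % C]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    occupied = [x for x in neighbors if x != 0]
    if not occupied:
        return False
    return sum(1 for x in occupied if x == me) / len(occupied) < threshold

def schelling_step(grid):
    R, C = len(grid), len(grid[0])
    movers = [(r, c) for r in range(R) for c in range(C)
              if grid[r][c] != 0 and unhappy(grid, r, c)]
    random.shuffle(movers)
    for r, c in movers:
        empties = [(i, j) for i in range(R) for j in range(C) if grid[i][j] == 0]
        if empties:
            i, j = random.choice(empties)
            grid[i][j], grid[r][c] = grid[r][c], 0
    return grid

random.seed(3)
g = [[random.choice([0, 1, 1, 2, 2]) for _ in range(15)] for _ in range(15)]
for _ in range(20):
    g = schelling_step(g)
```

Even with this weak preference (agents tolerate being in a local minority of up to half), clusters of like agents form after a few dozen steps, which is the point of Schelling’s argument.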
The main question about such models is whether they are really capable of faithfully representing the
collective behavior of human beings in spite of the minimalism of their assumptions (encapsulated in the local
rule). Can human behavior be reduced to such a simplistic representation? The empirical success of Schelling’s
model and its developments seems to show that populations of human beings exhibit behaviors amenable to a
statistical-mechanical description. However, taking this seriously is at odds with many psychological and
sociological theories, according to which much more complex elements are involved in the explanation of
human behavior. These theories would deny that statistical-mechanical mechanisms have any relevance to
understanding human behavior.
By contrast, Hegselmann and Flache (1998) claim that CA based models cover basic features of a
significant class of social processes, those based on the fact that over a period of time numerous people locally
interact and that the neighborhoods of interaction overlap. Moreover, CA based models of some social
phenomena make it quite clear how certain macro effects are dynamic consequences of decisions and
mechanisms operating only at the micro level. Order, structure, clustering and segregation are all generated by
such micro level rules. Hegselmann and Flache (1998) also emphasize that CA based modeling is not always simplistic: it is flexible enough to allow for various (narrow or wide) definitions of locality, and it makes possible investigations into social self-organization and its political or economic manifestations, based on the "invisible hand" paradigm. According to that paradigm, individuals acting in a more or less self-interested way nevertheless produce desirable collective results at the macro level. CA based models of human behavior can thus be sufficiently enriched to answer fundamental questions about societal organization (Epstein and Axtell 1996, Guenther et al. 1997).
3.8 Conclusion
The above examples show that CA are used in many distinct domains, from computer science and pure mathematics to the "phenomenological" study of complex phenomena, be they natural or social. Their striking mathematical and logical properties, as well as the powerful visualizations they allow for, should not hide the fact that in order to have them represent the evolution of natural or social systems, their particularities have to be carefully tamed and the various representational relationships they enter into have to be no less carefully established.
The versatility of CA is comparable to that of other mathematical devices that have been used in mathematics and in modeling for a longer time, like differential equations or matrices. Claims have been made, however, about the distinctive novelty CA are supposed to bring about in science (Rohrlich 1990, Hughes 1999, Fox Keller 2003). In the following section, we discuss the legitimacy of these claims as well as the implications of the various uses of CA in modeling.
4. Implications for a philosophical analysis of CA based simulation
This section focuses on the representational capacities of CA: we examine how CA are used in various modeling and theoretical physics contexts, drawing on the examples of section 3. Modeling phenomena with CA obeys different constraints than modeling with partial differential equations (PDE). The general question in this context, according to Hasslacher (1987), is to find a minimal spatial structure enabling the implementation of Boolean operations and to devise a (local) dynamics for it. In contrast with PDE based modeling, mathematical analysis plays a lesser role than the building up of a rule enabling parallel Boolean operations.
This striking difference is not easy to grasp, because usual modeling habits rely on PDE. Is it enough to claim that modeling with CA is radically different from modeling with PDE? We claim that the computer-scientific tree should not hide the modeling forest. Modeling can be carried out in many different ways and by means of many different tools. From the modeler's point of view, using CA instead of PDE makes a huge difference; however, modeling, as a representational activity, is bound to use any available means. Consequently, neither PDE nor CA are consubstantial to it, even in physics.
In order to investigate the relationships between the representational and the computational capacities of CA, we first examine whether CA based simulation can be called "direct" in any sense (section 4.1). We then compare CA based models with other modeling devices and assess the virtues of pluralism in modeling (section 4.2). In section 4.3, we examine what "exact computation", an oft-emphasized feature of CA models, amounts to. We conclude by examining the potential role of CA for fundamental physics (section 4.4). Whereas CA shed light on many questions about modeling, we have chosen to discuss only a few points about the relationships between modeling and computation.
4.1 Can simulation be “direct”?
The reason why CA based simulations are sometimes said to be "direct" is mostly that some of them, for instance simulations of the growth of stars, of traffic flow, of forest fires, etc., do not rely as much on underlying theories as other types of simulation do, for instance simulations in hydrodynamics. "Direct" in this sense means "without any detour via a theory usually appealed to in the investigation of the given phenomena". In this section, we investigate what is involved in this claim.
The starting point of the relevant cases of CA based simulation is to determine the basic local interactions responsible for the particularities of the global behavior of each phenomenon and to try to represent them with a CA rule in order to obtain the same macro-particularities. The leading question in this enterprise is: "Are there well-established correspondence rules that I can use to translate features of the system I want to model into specifications for an adequate CA model of it?" (Toffoli and Margolus 1990). So, from a practical point of view, the problem is to provide an analysis of the studied phenomenon that is suitable for a CA based representation of it. There is no requirement that the correspondence rules be based on any already available explanatory theory. More precisely, the question of how one justifies the choice of the to-be-represented features within the studied phenomena is an independent question, which can be asked of other types of models as well. In some cases, the justification of the chosen CA rule may only be that it yields the right behavior. In other cases, the justification may be that one had good independent reasons to choose to represent such and such features of the target system and the CA based model turns out to be successful for a wide range of predictions.
Consequently, in these examples of "phenomenological" models, no use is made of any explanatory theory; by contrast, in the traditional simulations of hydrodynamics, it is the explanatory theory, i.e., Newtonian mechanics as developed in the Navier-Stokes equations, that is the basis of the implemented model. However, this does not imply that mechanisms revealed by the CA rule cannot be explanatory. In particular, CA can be useful to explore the set of behaviors characteristic of "universal physics", like phase transitions, critical phenomena, scale invariance, and the like (cf. Batterman 2002, 13). When phenomena exhibit universal properties, it may happen that the details of the true mechanisms do not matter. In those cases, CA models may capture the "right physics" (to use Batterman's expression), as lattice gases do, even if they are not literally exact.
It is thus clear that CA based simulations are not "purely phenomenological". As emphasized by Vichniac (1984), "CA do not only seek a mere numerical agreement with a physical system, but they attempt to match the simulated system's own structure, its topology, its symmetries, in short its 'deep' properties". That CA based simulations are "direct" in the sense explicated above does not imply that they merely reflect the surface of phenomena. On the contrary, they may reveal important properties other than the ones traditionally used in the modeling activity.
4.2 The virtues of pluralism in modeling
As is manifest in our presentation of lattice gas models (section 3.5), different models of the same phenomena, including CA based models, can be built up. It is sometimes claimed that different models serve to represent different aspects of the same target system; however, in this case, the very same aspects are represented, in particular the relationships between the micro-properties of particles and the macro-properties of fluids. We do have genuine alternative models of fluids.
An objection might come to mind at this point. Are not Lattice Boltzmann Methods (LBM) just numerical methods, rather than alternative models? This objection can be answered as follows. First, as we have seen, LBM models are not obtained from the numerical study of the Navier-Stokes equations but independently, by taking into account conservation laws and deeper symmetries. Second, lattice gases and LBM require much more physical theory than pure numerical methods do. Whereas one can teach a beginner how to apply finite difference schemes within a few hours, LBM require physicists to fully understand statistical mechanics and to have deep insights into the significance of symmetries, in particular when spurious invariants have to be detected and destroyed (for example by three-particle collisions in FHP) or when the consequences of missing symmetries have to be scaled away (for example violations of Galilean invariance).
The differences between models governed by the Navier-Stokes equations and Lattice Boltzmann Methods are nevertheless important. Here are some of them, concerning prediction, representation, and computation.
- Lattice gas models are stable, whereas in classical simulations of fluid dynamics the code may stop running because the algorithm becomes unstable.
- Energy and momentum are conserved exactly; no round-off error can occur.
- Boundary conditions and particular geometries are easy to implement.
- Every single bit of memory is used equally effectively (this has been called "bit democracy" by von Neumann), whereas in classical simulations every number requires a whole 64-bit word. All one needs to know is whether or not a particle with a given velocity exists in a given cell, so only 6 bits of information are needed to completely specify the state of a cell. Consequently, no memory is wasted.
- Lattice gas operations are bit-oriented rather than floating-point oriented, and are thus executed more naturally on a computer.
- The algorithm is inherently parallel.
- More complex models than the simple hexagonal one are required in order to get more accurate results; however, the computational price of enhanced accuracy may be high. The most important limitation of these models is that their efficiency is restricted to a certain range of flow velocities, which derives from the fact that all particles are given the same velocity.
- As the discreteness of the lattice gas introduces noise in the results, noise reduction is another trade-off between computational price and accuracy. There is no systematic procedure for noise reduction. Noise is surely a problem; however, it also ensures that only robust (namely, physical) singularities survive, whereas standard codes, which produce less noise, may produce singularities that are mathematical artifacts.
- Finally, the cognitive cost of switching from PDE to CA is difficult to evaluate. As noted above, understanding the basic operations is easy, but a deeper understanding requires insight into more fundamental physics.
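The "bit democracy" point can be made concrete: an FHP cell fits in a 6-bit integer, and a collision is a pure bit operation (a sketch of the two-body rule only; the full FHP rule also includes three-particle collisions and chooses the rotation direction at random to restore symmetry):

```python
# Each FHP cell is a 6-bit integer: bit k set means a particle moving
# along lattice direction k (k = 0..5 on the hexagonal lattice).
def collide(cell):
    """Two-body head-on collision of the FHP lattice gas: a pair of
    opposite particles (k, k+3) scatters into the pair rotated by 60
    degrees. The rotation direction is fixed here for simplicity."""
    for k in range(3):
        pair = (1 << k) | (1 << (k + 3))
        if cell == pair:                     # exactly one head-on pair
            return (1 << ((k + 1) % 6)) | (1 << ((k + 4) % 6))
    return cell                              # all other states pass through

def popcount(cell):
    """Number of particles in the cell."""
    return bin(cell).count("1")

# Mass (particle number) is conserved exactly, bit by bit: no round-off.
before = 0b001001                            # particles in directions 0 and 3
after = collide(before)
assert popcount(after) == popcount(before)
```

Since the colliding pair and its rotated image both have zero net momentum, momentum conservation is likewise exact by construction.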
Are discrete fluids redundant with respect to classical models? They allow physicists to obtain roughly the same results. However, they shed new light on the mathematical physics of fluid dynamics. When one explores the class of functionally equivalent models, namely those models with different geometries and rules that produce the same dynamics in the same parameter range, important features are revealed about the mechanisms responsible for the qualitative features of hydrodynamics; robust mechanisms appear more clearly. Overall, as noted by Wolf-Gladrow (2000, 245-246), complementarity, rather than competition, is on the agenda of hydrodynamics.
4.3 What does “exact computation” mean?
In one of the first reflexive studies of CA based simulation, Vichniac (1984) proposes to add a third option to the classical dichotomy between models that are solvable exactly (by analytical means) but are very stylized and models that are more realistic but can only be solved approximately (by numerical means). According to him, certain CA models belong to the class of "exactly computable models", for they have enough expressive power to represent phenomena of arbitrary complexity and at the same time can be simulated exactly by concrete computational means. In these CA models, the mathematics is exactly implemented and is thus automatically free of numerical noise. These models are not built to solve any equations; they merely perform simple space-dependent decisions. Consequently, according to Vichniac, they are capable of digital, non-numerical simulations of physical phenomena.
Are CA models thus a panacea for all the problems of round-off errors, divergence, tractability of discretization techniques, etc., that are usually encountered in traditional simulations? As indicated above, in order for a CA model to fulfill its role, a careful analysis of the studied phenomenon has to be made, which the model should match. Exact computability associated with satisfactory representational ability is not a property of just any CA model: exact computability is at best an interesting by-product of a well-designed model. However, once adequate "correspondence rules" have been designed, exact computation is of course invaluable.
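The contrast with numerical noise can be illustrated with the simplest possible case, an elementary CA (rule 90) whose update is a pure Boolean operation, so that the implemented mathematics is exact by construction (a toy illustration of the general point, not one of Vichniac's own examples):

```python
def step_rule90(cells):
    """One synchronous update of elementary CA rule 90 on a ring: each
    cell becomes the XOR of its two neighbours. Every operation is an
    exact Boolean one, so no round-off error of any kind can occur."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

cells = [0] * 16
cells[8] = 1                     # a single seed cell
for _ in range(4):
    cells = step_rule90(cells)
# After t steps the seed has spread into the binomial-coefficients-mod-2
# pattern (a slice of the Sierpinski triangle), computed exactly: after
# 4 steps only the cells at distance 4 from the seed are occupied.
```

However simple, the example displays the feature Vichniac emphasizes: the computer state and the model state coincide, so the question of discretization error does not even arise.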
Exploring the effects that the advance of our knowledge of CA may have on the very nature of mathematical physics, Toffoli (1984) claims that this new knowledge opens the possibility of replacing the customary concepts of real variables, continuity, etc., with more constructive, "physically-minded" counterparts. The success of differential equations as a starting point for modeling and simulation in physics depends, according to Toffoli, on the choice of symbolic integration as a main instrument of our understanding of natural processes. Once this choice has been given up, on the ground that few differential equations have a closed-form solution, there is no reason to keep differential equations as the starting point for modeling.
Differential equations are numerically computed (for instance on general-purpose computers) in a manner that is "at least three levels removed from the physical phenomena that they aim at representing: (a) we stylize physics into differential equations, then (b) we force these equations into the mould of discrete space and time and truncate the resulting power series, so as to arrive at finite-difference equations, and finally, in order to commit the latter to algorithms, (c) we project real-valued variables onto finite computer words ('round-off'). At the end of the chain we find the computer – again a physical system". This analysis leads Toffoli to ask whether there could be "a less roundabout way to make nature model itself" (Toffoli 1984, 121).
Toffoli outlines an approach in which the theoretical mathematical apparatus in which one "writes" the models is essentially isomorphic with the concrete computational apparatus in which one "runs" them. Within this approach, the few infinities that one may still want to incorporate in a physical theory (e.g., as the size of a system grows without bounds) are defined as usual by means of the limit concept; however, the natural topology in which to take this limit is that of the Cantor set, rather than that of the real numbers (Toffoli 1984, 117-118).
Toffoli's proposal goes very far down the road of the possible effects of the exact computability of CA models on physical modeling. Whereas this approach remains partly speculative, some important results have been achieved (cf. Chopard and Droz 1998). For the time being, exact computability should not overshadow the fact that CA models are delicate constructs whose computational capacities are sometimes difficult to tame – as is the case for any model using sophisticated mathematics.
4.4 A new kind of physics?
In his 1982 paper, Feynman asked whether "nature, at some extremely microscopic scale, operates exactly like discrete computer logic". He then expressed the guiding line of the (supposedly) new kind of physics that CA make possible: "So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the checker board with all its apparent complexities". This hypothesis has been further developed by Wolfram, Fredkin, Finkelstein, Minsky, Wheeler, Vichniac and Margolus. It had previously been discussed by Konrad Zuse (see Fredkin 1992 for a review).
Vichniac (1984) characterizes this trend of thought as a "bold approach, in line with the atomistic tradition that advocates taking discreteness seriously", one that "sees nature as locally – and digitally – computing its immediate future". According to Fredkin, the core of the hypothesis is that a single CA rule will be found that models all of microscopic physics, and models it exactly. The general hypothesis of "crystalline computation" (Margolus' phrase), according to which CA might be the formalism of a new, discrete and more faithful physics, is still (almost) purely speculative. Nevertheless, briefly presenting it may help get a better understanding of the potential of CA for fundamental physics and draw a clear distinction between what is speculative about CA and what is not.
According to Margolus, the hypothesis is that crystalline arrays of logic, namely CA, might be able to simulate our known laws of physics "in a direct fashion" (1999). There is some ambiguity in this expression; however, it may be understood as meaning that using CA to do fundamental physics would spare us the idealizations and approximations that are indispensable when other formalisms are used. The main claim of the proponents of crystalline computation or digital mechanics is that:
(1) CA may be a better mathematical machinery than continuous differential equations for describing fundamental physical dynamics.
The main motivation for (1) is that CA allow for exact models. "Exact" means that no approximations are required. Every discrete formalism is exact in this sense. However, an exact computation may be performed within a highly idealized representation. Exactness of computation is thus not all there is to "direct simulation". A second claim is contained in the "crystalline computation" hypothesis, namely:
(2) CA can represent fundamental laws of physics without using any idealization.
The "crystalline computation" hypothesis is therefore a hypothesis about the representational capacities of CA as well as about their computational capacities. Toffoli and Margolus (1990) thus insist that invertible CA are playing an increasingly important role as conceptual tools for theoretical physics.
The main argument in favor of claim (2) is that "the most essential property of CA is that they emulate the spatial locality of physical law" (Margolus 1999). "Spatial locality" means that no action-at-a-distance is allowed. The spatial locality of CA can be described as follows: a CA computation can be viewed as a regular space-time crystal of processing events, namely a regular pattern of communication and logic events that is repeated in space and in time (Margolus 1999). Of course, the patterns of data that evolve within the computer are not necessarily regular; only the structure of the computer is. It seems a safe bet to claim that the fundamental laws of physics are local in this sense. Nevertheless, Margolus' sentence is ambiguous, since neither the intended meaning of "emulation" nor the implications of CA emulating spatial locality are clear.
In their 1990 paper, Toffoli and Margolus also insist on the uniformity of CA laws as an argument in favor of claim (2). Uniformity, according to which the laws of physics are the same at every instant of time, is another fundamental aspect of the laws of physics. Other properties of CA also justify claim (2) according to Margolus (1999), namely their ability to represent energy conservation and invertibility, and the possibility that classical bits capture the finite-state character of a quantum spin system. Margolus thus claims that CA allow one to simultaneously capture several basic aspects of physics in an exact digital model.
Toffoli and Margolus (1990) conclude from the arguments they give in favor of claim (2) above that a "strong effectiveness is built in the definition of CA" and argue that continuous dynamical systems make up a weaker standard for effectiveness. Fredkin emphasizes that it is possible to map properties of a RUCA (reversible universal CA) informational process onto properties of physics (energy, angular momentum) and to have those properties conserved as a consequence of the reversibility of the rule.
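Invertibility can be made concrete with the standard second-order construction used in the reversible-CA literature: XORing the update with the previous configuration makes any local rule exactly reversible. A minimal sketch, using rule 90's neighbour-XOR as the local function (the lattice size, seed, and number of steps are arbitrary choices):

```python
import random

def step_second_order(prev, curr):
    """Second-order reversible update: the new configuration is a local
    function of the current one (here, rule 90's neighbour-XOR) XORed
    with the previous one. Running the same rule on the swapped pair
    undoes the evolution exactly, bit for bit."""
    n = len(curr)
    f = [curr[(i - 1) % n] ^ curr[(i + 1) % n] for i in range(n)]
    return curr, [f[i] ^ prev[i] for i in range(n)]

rng = random.Random(2)
a = [rng.randrange(2) for _ in range(32)]
b = [rng.randrange(2) for _ in range(32)]
prev, curr = a, b
for _ in range(100):
    prev, curr = step_second_order(prev, curr)
prev, curr = curr, prev          # time reversal: swap the state pair
for _ in range(100):
    prev, curr = step_second_order(prev, curr)
assert (prev, curr) == (b, a)    # the initial pair is recovered exactly
```

Unlike a dissipative numerical scheme, no information is lost at any step, which is why such rules can carry exactly conserved quantities of the kind Fredkin describes.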
Claims (1) and (2) are linked together by a further one, according to which:
(3) "Exactly the same general constraints that we impose on our CA systems to make them more like physics also make them more efficiently realizable as physical devices" (Margolus 1999).
This third claim is about the realizability, or implementation, of CA computation. According to it, the fundamental laws of physics are the very laws of efficient computation.
5. A modeling tool among others
The examples in section 3 as well as the discussion in section 4 make clear that CA are neither a revolutionary discovery capable of solving every problem nor an odd mathematical trick unworthy of attention from a philosophy of science point of view. We have shown that CA are first and foremost a mathematical tool among others. Partial differential equations, stochastic systems and matrices are other such tools.
As a mathematical tool, CA can be expected to be well-suited to some tasks and less to others; however, this cannot be predicted before trying. PDE proved to be a particularly powerful and easy-to-handle tool during the last centuries; CA may prove just as useful in the future. As we have emphasized in this paper, there are already entire domains and types of phenomena for which CA do better than traditional models (or computer architectures). In the study of phenomena in which stability matters a lot, CA fare better from a representational or predictive point of view. This implies that CA cannot be viewed as a mere numerical trick only useful to speed up calculations. Nonetheless, CA can above all be helpful regarding computational issues, in every case where parallel computation is faster than sequential computation. Many more systems involving local interactions may soon fall within this category. As illustrated by fluid dynamics, the reason why CA parallel computations can be quicker is not purely numerical but is deeply rooted within physics itself.
Computational issues involve distinct questions that should not be mixed up: "What is the least greedy model with which to study such a phenomenon?", "What is the most efficient method to solve this model?", or "Is it worth building less versatile parallel computers to perform this task (e.g., climate prediction)?". These questions about parallel computation and complexity are still under study (Machta 2006); however, the available results already show that CA are a tool worth using and investigating. These issues about complexity and the good use of computational resources are quite independent of the much more speculative debates about fundamental physics that we mentioned in section 4.4. As philosophers, we should keep these debates clearly separate.
6. Conclusion
CA are first and foremost discrete dynamical systems performing parallel computation. Their mathematical properties make up the starting point of any rigorous analysis of their role in modeling and simulation; they explain why CA can be used to perform computations that are significant for physics or other scientific investigations. We have argued that, accordingly, CA can be put side by side with calculus, since their formal properties endow them with a remarkable representational capacity. However, we have shown that generalizations about the possibilities of CA for modeling are hardly justified. CA are used in a great variety of cases. Further, in spite of the novelty of CA as modeling tools, some CA based models, e.g. lattice gases, are rooted in traditional physics.
We have also insisted that CA's computational and representational abilities do not make them a universal panacea for every kind of modeling situation. CA, like PDE, bear complex relations to theories. CA based models, like PDE based models, encapsulate theoretical hypotheses whose consequences they allow one to compute. Such a translation is never an unproblematic one, be it with CA or with PDE. Moreover, questions about a model's tractability occur for CA based models in the same way as they do for classical models. Parallel computation is not a miraculous answer to all modeling questions. One virtue of our systematic examination of what is currently done with CA is perhaps to shed new light on the relationships between modeling and computation.
References
Badii, R. and Politi, A. (1999) Complexity: Hierarchical Structures and Scaling in Physics, Cambridge University Press.
Bak, P., Chen, K. and Tang, C. (1990) "A forest-fire model and some thoughts on turbulence", Phys. Lett. A 147, 297–300.
Batterman, R. W. (2002) The Devil in the Details: Asymptotic Reasoning in Explanation, Reduction, and Emergence, Oxford University Press.
Berlekamp, E., Conway, J. and Guy, R. (1982) Winning Ways for Your Mathematical Plays, Volume 2, Academic Press.
Chen, K., Bak, P. and Jensen, M. H. (1990) "A deterministic critical forest-fire model", Phys. Lett. A 149, 207–210.
Chopard, B. and Droz, M. (1998) Cellular Automata Modeling of Physical Systems, Cambridge University Press.
Culik, K. and Yu, S. (1988) "Undecidability of CA Classification Schemes", Complex Systems 2, 177.
Domany, E. and Kinzel, W. (1984) "Equivalence of cellular automata to Ising models and directed percolation", Phys. Rev. Lett. 53, 311–314.
Doolen, G. D., ed. (1991) Lattice Gas Methods: Theory, Applications, and Hardware, MIT Press.
Drossel, B. and Schwabl, F. (1992) "Self-organized critical forest-fire model", Phys. Rev. Lett. 69, 1629–1632.
Dubacq, J.-C., Durand, B. and Formenti, E. (2001) "Kolmogorov complexity and cellular automata classification", Theoretical Computer Science 259, 271.
Epstein, J. M. and Axtell, R. (1996) Growing Artificial Societies: Social Science from the Bottom Up, MIT Press, Cambridge, MA.
Feynman, R. (1982) "Simulating physics with computers", International Journal of Theoretical Physics 21 (6/7), 467–488.
Fox Keller, E. (2003) "Models, Simulations, and 'Computer Experiments'", in H. Radder, ed., The Philosophy of Scientific Experimentation, The University of Pittsburgh Press, Pittsburgh, pp. 198–215.
Fredkin, E. (1992) "Finite nature", Proceedings of the XXVIIth Rencontre de Moriond.
Fredkin, E. and Toffoli, T. (1982) "Conservative Logic", International Journal of Theoretical Physics 21, 905–940.
Georges, A. and Le Doussal, P. (1989) "From equilibrium spin models to probabilistic cellular automata", Journal of Statistical Physics 54, 1011–1064.
Goldenfeld, N. and Israeli, N. (2006) "Coarse-graining of cellular automata, emergence, and the predictability of complex systems", Physical Review E 73.
Grassberger, P. (2002) "Critical behaviour of the Drossel-Schwabl forest fire model", New J. Phys. 4, 17.
Guenther, O., Hogg, T. and Huberman, B. A. (1997) "Market Organizations for Controlling Smart Matter", in R. Conte, R. Hegselmann and P. Terna (eds.) Simulating Social Phenomena, Springer, Berlin, pp. 241–257.
Hasslacher, B. (1987) "Discrete Fluids", Los Alamos Science 15.
Hegselmann, R. and Flache, A. (1998) "Understanding Complex Social Dynamics: A Plea For Cellular Automata Based Modelling", Journal of Artificial Societies and Social Simulation 1(3).
Henley, C. L. (1989) "Self-organized percolation: a simpler model", Bull. Am. Phys. Soc. 34, 838.
Henley, C. L. (1993) "Statics of a 'self-organized' percolation model", Phys. Rev. Lett. 71, 2741–2744.
Holland, J. (1975) Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor.
Hughes, R. I. G. (1999) "The Ising model, computer simulation, and universal physics", in Morgan and Morrison eds. (1999), 97–145.
Humphreys, P. (2004) Extending Ourselves: Computational Science, Empiricism, and Scientific Method, Oxford University Press, New York.
Ilachinski, A. (2001) Cellular Automata: A Discrete Universe, World Scientific, Singapore.
Langton, C. G. (1990) "Computation at the edge of chaos: Phase transitions and emergent computation", Physica D 42, 12.
Lawniczak, A. and Kapral, R., eds. (1996) Pattern Formation and Lattice-Gas Automata, American Mathematical Society.
Machta, J. (2006) "Complexity, Parallel Computation and Statistical Physics", Complexity 11(5), 46–64.
Margolus, N. (1984) "Physics-like models of computation", Physica D 10, 81–95.
Margolus, N. (1996) "CAM-8: a computer architecture based on cellular automata", in A. Lawniczak and R. Kapral eds., pp. 167–187.
Margolus, N. (2002) "Crystalline Computation", in A. Hey ed., Feynman and Computation: Exploring the Limits of Computers, Addison-Wesley.
Morgan, M. and Morrison, M., eds. (1999) Models as Mediators: Perspectives on Natural and Social Science, Cambridge University Press.
Nagel, K. and Schreckenberg, M. (1992) "A cellular automaton model for freeway traffic", Journal de Physique I 2, 2221–2229.
Olmos, L. E. and Muñoz, J. D. (2004) "A Cellular Automaton Model for the Traffic Flow in Bogota", International Journal of Modern Physics C 15(10), 1397–1411.
Rohrlich, F. (1990) "Computer Simulation in the Physical Sciences", PSA 1990, volume 2, pp. 507–518.
Rothman, D. and Zaleski, S. (1997) Lattice-Gas Cellular Automata: Simple Models of Complex Hydrodynamics, Cambridge University Press.
Schadschneider, A. and Schreckenberg, M. (1993) "Cellular automaton models and traffic flow", J. Phys. A 26, L679.
Schelling, T. (1969) "Models of segregation", American Economic Review 59, pp. 488–493.
Schelling, T. (1971) "Dynamic models of segregation", Journal of Mathematical Sociology 1, pp. 143–186.
Succi, S. (2001) The Lattice Boltzmann Equation, Oxford Science Publications.
Toffoli, T. (1984) "Cellular automata as an alternative to (rather than an approximation of) differential equations in modeling physics", Physica D 10, 117–127.
Toffoli, T. and Margolus, N. (1990) "Invertible cellular automata: A review", Physica D 45, 229–253.
Vichniac, G. (1984) "Simulating physics with cellular automata", Physica D 10, 96–116.
Wolf-Gladrow, D. A. (2000) Lattice-Gas Cellular Automata and Lattice Boltzmann Models: An Introduction, Springer.
Wolfram, S. (2002) A New Kind of Science, Wolfram Media, Champaign, IL.
Zurek, W. H. (1984) "Reversibility and stability of information processing systems", Phys. Rev. Lett. 53, 391.