Artificial Intelligence and the Natural Selection; Decision Trees and the Network Forest




S. Solomon, E. Shir and S. Kirkpatrick

Hebrew University of Jerusalem

sorin@cc.huji.ac.il, eran@cogniview.com, kirk@cs.huji.ac.il



Abstract


We present the Internet (and the overlying Web) as a complex system with several layers of emergence. We suggest the various ways in which one can deduce the increasing levels of complexity and (self-)organization one from the other. An effort is made to give enough technical detail to make our approach plausible. Yet the emphasis is on the challenges that the web poses in principle to our understanding. The connection with some previous studies of similar (but simpler) emergent complex systems is made.



1. The Identity Crisis of the Single Computer


When Turing defined the conceptual framework for information processing machines he limited their scope to the context of rational, deterministic, closed systems. For a long while computer engineers and computer scientists were proud of the absolute conceptual and practical control that they had over their algorithms and machines. Even when randomness appeared, it was under well-defined rules that still allowed rigorous proofs and rigorous control. Most project managers spent most of their time eliminating any possibility that the system would do anything other than what was explicitly prescribed by the user specifications.


As long as society's main request of computing machines was simply to compute, this perfect order, predictability, reproducibility and their closed, self-contained nature constituted the very backbone of product integrity. However, at some stage computers became so useful, cheap, powerful and versatile that customers required their massive integration into real-life situations. Upon throwing computers into real life, one opened a Pandora's box full of surprises:


- computers proved to be inexplicably poor at tasks that the AI pioneers had (recurrently!) predicted to be quite at hand.

Example: Turing formulated more than 50 years ago the famous Turing test: a woman communicates in writing with a computer and, respectively, with another woman. The objective of the computer is to masquerade as a woman. Turing's hope was (Turing 1950):



“… in about fifty years’ time … interrogator will not have more than 70 percent chance of making the right identification …”


Obviously the prediction is far from being fulfilled at the present time.


- humans, too, turned out to be not so good at being … humans.

Example: Men fail the above Turing test in about 80% of the cases. Therefore a computer that succeeded 70% of the time, as predicted by Turing, would in fact be super-human (or at least super-man) (Adam and Solomon 2003).


- computers could do better than people at tasks that humans had claimed for themselves as intrinsically and exclusively human.

Example: By mimicking on a computer the common structural elements of past successful ideas, one was able to construct idea-generating algorithms. The computer-generated ideas were ranked (by humans) higher than ideas generated by (other) humans (Goldenberg et al 1999).


- computers turned out to do the same tasks that humans do in very different ways.

Example: By analyzing mathematically psychophysical observations, one was able to infer the approximate, quick-fix, ad-hoc algorithms that the visual system uses to reconstruct 3D shapes from 2D image sequences. They are totally different from the ideal, mathematically rigorous 3D reconstruction algorithms. In fact, using this knowledge one was able to predict specific visual illusions (that were dramatically confirmed by subsequent experiments) (Rubin et al 1995).


2. The Social life of computers


Beyond the surprises in the behavior of individual computers, much more far-reaching surprises lurked from another corner: already 30 years ago, Physics Nobel Prize laureate Phil Anderson (1972) wrote a short note called "More is Different". Its main point was to call attention to the limitations of thinking about a collection of many similar objects in terms of some “representative individual” endowed with the sum, or average, of their individual properties. This “mean field” / continuum / linear way of thinking seems to fail in certain crucial instances. In fact these are exactly the instances that emerge as the conceptual gaps between the various disciplines. Indeed, the great conceptual jumps separating the various sciences, and the accompanying mysteries connected to the nature of life, intelligence and culture, arise exactly when "More Is Different". It is there that life emerges from chemistry, chemistry from physics, consciousness from life, social consciousness / organization from individual consciousness, etc.


The physical, biological and social domains are full of higher functions that emerge spontaneously from the interactions between simpler elements. The planetary computer infrastructure has by now acquired a large number of elements. It is therefore not unexpected that this collection of man-made artifacts would develop a mind of its own. In fact we do perceive nowadays that the very basic rules under which computers were supposed to function are being affected. Are we facing a "More is Different" boundary? It might be too early to decide, but let us make a list of the ways the rules of the game have changed:



- Instead of being closed systems, computers came to be exposed to a rapidly changing environment: not just environmental dynamics, but continuous change in the very rules that govern the environment's behavior.


- Some of the environment consists of other computers, so the potentially infinite endogenous loop of adaptation, self-reaction and competition is sparked.


- Instead of a deterministic dynamics, the system is subjected to noise sources (Shnerb et al. 2001) that do not even fulfill the right conditions to allow rigorous proofs (e.g. in many cases, far from being "normal", the noise probability distribution does not even have a finite standard deviation).


- Instead of worrying about the integrity of one project at a time, one has to face the problems related to the parallel evolution of many interacting but independently owned projects, carried out by different teams.



- Instead of being clustered in a well-defined, protected, fixed location (at a fixed temperature :-), computers started to be placed in various uncoordinated locations and, lately, some of them are mounted on moving platforms.

- Instead of a single boss who decided which projects go ahead, with what objectives and with what resources, the present environment is driven by the collective behavior of masses of users whose individual motivations, interests and resources are heterogeneous and unknown. Unfortunately, psychology was never one of the strengths of the computer people…


In conclusion, the main change to which we are called to respond is the passage of computer engineering from deterministic, top-down designed and controlled systems to stochastic, self-organizing systems resulting from the independent (but interacting) actions of a very large number of agents.



Main conceptual shifts related to this:



- The fast evolution of the rules and of the players' behavior makes classical AI and game theory tools inapplicable => relying on adaptive co-evolution becomes preferable (even though, in general, there are no absolute, mathematically rigorous success guarantees).



- Instead of well-defined asymptotic states obtainable by logical trees etc., the pattern selection depends on spatio-temporal self-organization and frozen accidents, and a Nash equilibrium might not exist or might be unreachable in finite time.



- Inhomogeneity and lack of self-averaging (Solomon 2001) make the usual statistics in terms of averages and standard deviations impractical: the resulting (Pareto-Zipf power-law) distributions (Levy and Solomon 1996) might not, strictly speaking, have an average (and in any case not one close to the median), nor a finite standard deviation (see the sketch after this list).



- Systems display phase transitions: dramatic global system changes triggered by small parameter changes.
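The statistical point above (the lack of a finite standard deviation and of self-averaging) can be made concrete with a few lines of code. The following is a minimal illustration of ours, not part of any tool described here, assuming only the numpy library: it samples a Pareto-type distribution with tail exponent α = 1.5, for which the theoretical variance is infinite, and prints the running sample mean, standard deviation and median as the sample grows.

```python
# Minimal illustration of the lack of self-averaging in heavy-tailed
# (Pareto-Zipf-like) data: for tail exponent alpha < 2 the variance is
# infinite, so the sample mean and standard deviation never settle down,
# while the median stays essentially fixed.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5                                         # mean exists, variance does not
samples = 1.0 + rng.pareto(alpha, size=1_000_000)   # classical Pareto, x_min = 1

for n in (1_000, 10_000, 100_000, 1_000_000):
    x = samples[:n]
    print(f"n={n:>9}  mean={x.mean():8.2f}  "
          f"std={x.std():10.2f}  median={np.median(x):5.2f}")
```

The median stays close to 2^(1/α) ≈ 1.6, while the sample mean wanders and the measured standard deviation typically keeps growing with the sample size; this is why averages and standard deviations are poor summaries of such systems.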


The situation seems really hopeless. But a ray of light appears: in their encounters with the real world, CS people occasionally encountered quite accomplished (and sometimes attractive) entities which were not designed (let alone rationally) by anybody.

So maybe there is another way to create things besides constructing an exhaustive logical tree that prescribes in detail the system's action in each given situation. For lack of another name one could call it "natural evolution".

There is a long list of large prices to pay for giving up the "old way": the system might not fulfill well-defined user specifications, might not be fault-free, might not be predictable, etc.



3. Terrestrial Intelligence Search


Almost everybody has heard of SETI@Home. SETI@Home is a project that aims to use the spare CPU power of computers all over the world in order to analyze the enormous amounts of data constantly aggregated from radio telescopes and to search for evidence of extraterrestrial messages. The project, initiated after the funding for dedicated computing power had dried up, has become a huge success and is now considered to be one of the strongest super-computers ever created.

SETI@Home's essence is to get hold of as much CPU power as possible, and its architecture is a client/server one (the client gets a packet of data to analyze from the server and sends the results back to the server). However, there is no interaction between peers, no collaborative effort. The SETI@Home phenomenon shows that the psychology and social behavior of the Internet's citizens is sometimes a more stable and on-going force than the usual rules of internet evolution.


GOOGLE and AKAMAI are essential for the access and use of global information networks in today's network. GOOGLE attempts to find anything that is currently on the Web. AKAMAI pushes information as close as is justified to its consumers to ensure good performance.


Both GOOGLE and AKAMAI are highly centralized, or depend upon specially selected server complexes which are themselves centrally planned, deployed, and managed. The ultimate goal must be the emergence of fully distributed, invisibly- or self-managed facilities which take full advantage of a storage, computation, and communication hierarchy that spans the range from central, physically secure services enjoying ultra-high bandwidth to the incredible capacity that will exist out at the edges of the network, with intermittent, lower bandwidths and a division of labor between personal and community activities.


GNUTELLA is a first attempt at opening up information sharing and access in a purely peer-to-peer structure. Overlay networks are the mechanism by which such new functions are brought into use in the previously existing networks. GNUTELLA exemplifies the new situation in which the CS problem becomes intimately intertwined with social and psychological problems: the various social groups connected to GNUTELLA or clustered around various (We)blogs could not exist without the internet medium, but on the other hand they have an independent dynamics and influence back the present usage and future development of the web. The Web develops collective clusters that have their own emergent identity, interests, adaptive behavior, tastes: in short, a mind of their own.


Could it be that GNUTELLA and its grandchildren may pass the SETI intelligence tests?


Could it be that, upon appropriate study and using enough empathy, we could recognize the Web as an entity in its own right that fully deserves the label of terrestrial intelligence?


If you jump to say no, recall that less than 100 years ago entire nations and races denied human status to their peers belonging to other races / nations.



In order to understand the planetary computer infrastructure and the way it may develop in the future, we have to first realize that we are still at the stage at which

we do not even know the basic facts about the Internet!


Sure, we know - or can find out readily - everything about every single element of the system, but does the knowledge of all the electric currents in your TV teach you anything about the currently running TV program?


4. The Problems with Studying the Internet


This basic knowledge of the Internet can be achieved only by enhancing very dramatically the scope and precision of the measurements of the network topology and traffic. Moreover, one needs good measurements of the short and long time-scale changes of these network characteristics.


Only on the basis of this experimental knowledge will one be able to:

- develop and validate relevant models,

- test ideas on current networks,

- extrapolate potential architectures, protocols and control strategies for the future, and

- search for emergent cognitive functions.


There are crucial difficulties related to Internet studies:

- The lack of knowledge of the details of the network (and the lack of knowledge of which details are relevant vs. which are negligible).

- Centrally measuring the Internet's topology is an impossible task because of "hidden connections" (basically, a node A can acquire by measurement knowledge of the links leading from A to some other nodes B, C…, but, in general, not of the links between B and C).

- The impossibility of representing the entire network in one machine: creating in a lab a relevant, truthful simulation of either today's or tomorrow's Internet is an impossible task, due mainly to the following problems:


A. Consistency - It is impossible to create artificially a replica of the Internet, since the growth of the Internet is an extremely collaborative, distributed and complex process, which itself changes rapidly.


Currently, two approaches are used in creating models of the Internet.


- The first and most popular approach is to "grow" an artificial Internet using a heuristic algorithm which aims to capture certain global characteristics of the Internet, such as the scale-free structure of the graph topology. Such an approach is less than sufficient, since we still lack a full understanding of the topological nature of the Internet; the current understanding is in fact misleading in some important respects.

- Another potential approach is to use the results of a central topology measurement project and create a model of the real Internet. Again, it is now widely accepted that extracting the Internet structure using a central or semi-central experiment architecture has significant drawbacks, and that the model created deviates substantially from reality. In any case, while a rough, static, boolean picture of the topology may be taken, it is obvious that the message dynamics cannot be mimicked centrally.


B. Scale - Creating a large enough model becomes a resource issue when building it on one machine, or even a cluster of machines. Beyond a certain scale it becomes impossible and uneconomical. Thus most researchers compromise and test their ideas on a less than adequate simulation model. This approach is clearly in danger of missing the essential "More is Different" effect.


C. Adaptability - As the Internet evolves, so should its model. However, understanding explicitly each meaningful change and implementing it into the model may become a Sisyphean task.


D. Unknown parameters - Connection bandwidths, switch speeds, stack capacities and the software parameters of each relay.




5. The Solution: The Introspective Internet


THE IDEA


The idea is to have the Net measure itself, simulate itself and foretell its own future.


Instead of using a single machine to study the Internet, one will use (potentially) all the machines on the Internet.

The study of the Internet gives us a first opportunity to use the object of study itself as the simulation platform.


It is told of one emperor of China that he asked his counselors to create for him a detailed map of his vast kingdom. After numerous attempts that did not satisfy his desire for detail and accuracy, the counselors eventually took the emperor to the most accurate model of all: the real China which lies behind the walls.



This story is often told to stress the fact that every model is a simplification of the object it aims to model. However, our claim is that by using the Internet itself as its model, we will be able to bridge this gap between the real object and its model.




THE METHOD


We propose to use the Internet as the main infrastructure that will simulate itself and its future generations: recruit millions of nodes spread over the entire Net that:

- Perform local measurements of their Internet neighborhood (a minimal sketch of such a measurement appears after the next paragraph)

- Participate as message routing / processing elements in collective Internet experiments


One can hope to harness into the task hundreds of thousands and even millions of users and nodes at all hierarchy levels, who will contribute some of their CPU power, Internet bandwidth and, most importantly, their topological / geographical whereabouts on the net, to the effort of monitoring and studying the present-day Internet on the one hand, and of simulating and studying its future on the other.
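To make the first task concrete, here is a hedged sketch of what a local measurement could look like; it is our illustration only, since the text does not specify the actual DIMES agent at this level of detail. The sketch shells out to the standard traceroute utility toward a couple of placeholder targets, extracts the router-level edges (hop_i, hop_i+1) visible from this vantage point, and prints them; in a real deployment these edge sets would be reported to an aggregation server, and merging the views of many vantage points is precisely what exposes the "hidden connections" that no single central measurement can see.

```python
# Hedged sketch of a DIMES-style local measurement (illustration only):
# run traceroute from this vantage point and collect the router-level edges
# (hop_i -> hop_i+1) that are visible locally.  A central experiment cannot
# observe these edges unless it controls this vantage point.
import subprocess

TARGETS = ["www.example.org", "www.example.net"]   # placeholder target list

def local_edges(target):
    """Return the set of consecutive-hop edges observed toward `target`."""
    out = subprocess.run(["traceroute", "-n", "-q", "1", target],
                         capture_output=True, text=True, timeout=120).stdout
    hops = []
    for line in out.splitlines()[1:]:              # skip the header line
        fields = line.split()
        if len(fields) >= 2 and fields[1] != "*":  # '*' = no reply at this hop
            hops.append(fields[1])                 # numeric router address
    # Note: unresponsive hops are simply skipped in this rough sketch.
    return set(zip(hops, hops[1:]))

if __name__ == "__main__":
    edges = set()
    for t in TARGETS:
        edges |= local_edges(t)
    # A real agent would report these edges to an aggregation server;
    # here we only print them.
    print(f"{len(edges)} locally observed edges")
    for a, b in sorted(edges):
        print(a, "->", b)
```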


Our goal is to create the largest computer simulator ever, one that will be many scales larger than the second largest. But this simulator will not merely be a very large one. It will also be extraordinary, since each of its elements will be unique: each element will bring not only generic resources such as cycles, memory or bandwidth, it will also be a true representative of its local net neighborhood. In this way, a strong connection is created between the simulator and its object of simulation, i.e. reality.



The Tool: DIMES (Distributed Internet MEasurement and Simulation)


We propose the creation of a distributed platform that will enable:

- global-scale measurement of the Internet graph structure, packet traffic statistics and demography,

- simulation of Internet behavior under different conditions,

- simulation of the Internet's future.



We propose creating a uniform research infrastructure to which programmable experiment nodes can be added spontaneously. Using this layer, large-scale statistics aggregation efforts will be conducted. We will conduct simulations that will study both:

- the reaction of the current Internet to various irregular situations, and

- the usability of concrete new ideas, algorithms and even net physical growth schemes.


Building the research infrastructure itself is a worthwhile and important project, since it will take the already known and established success of distributed computing projects such as SETI@Home to a new level, namely collaborative computing. Whereas distributed computing distributes small pieces of work among many machines which interact only with the central "Work Manager", collaborative computing also enables interaction algorithms between the machines. On this collaborative computing platform, collaborative algorithms can be studied and complex phenomena, such as the emergence of global behavior from local interaction, can be exhibited in the most realistic manner.
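The difference can be caricatured in a few lines (a toy sketch of ours, with placeholder names and a trivial stand-in workload): in the distributed style every worker exchanges data only with the central work manager, while in the collaborative style workers also exchange intermediate results directly with their peers on the overlay.

```python
# Toy contrast (illustration only) between distributed and collaborative
# computing, using a trivial stand-in unit of work.

def distributed(workers, tasks):
    """SETI@Home style: each worker talks only to the central work manager."""
    results = {}
    for worker, task in zip(workers, tasks):
        results[worker] = task ** 2              # the unit of work
    return results                               # the manager aggregates all

def collaborative(workers, tasks):
    """DIMES style: workers also exchange intermediate results with peers."""
    local = {w: t ** 2 for w, t in zip(workers, tasks)}
    snapshot = dict(local)
    for i, w in enumerate(workers):
        peer = workers[(i + 1) % len(workers)]          # next peer on a ring overlay
        local[w] = (snapshot[w] + snapshot[peer]) / 2   # peer-to-peer exchange
    return local

workers = ["node-a", "node-b", "node-c"]
print(distributed(workers, [1, 2, 3]))
print(collaborative(workers, [1, 2, 3]))
```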


DIMES Set Up


The backbone of DIMES@Home will be the distributed network of copies of the basic (recursive) software agent.


These software agents will form a virtual Internet layer that will sit on top of the physical Internet.


Similarly to the real nodes of the Internet graph, i.e. the routers, the basic agents will have virtual routing capabilities, which will enable them to forward messages on the virtual layer.



Another important feature of the basic agent will be its ability to simulate inside itself other instances of itself. This will enable us to integrate seamlessly between large agents, which simulate and represent large Internet neighborhoods and reside on strong machines, and thin agents, which simulate a single node and reside on a home PC at the bottom of the process priority list.
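To make the notion of a recursive agent concrete, here is a hedged sketch of ours (the actual DIMES agent design is not specified in this text; class and method names are placeholders). A single Agent class both forwards messages on the virtual overlay and can host further Agent instances that it simulates internally, so the same code can act as a heavyweight neighborhood simulator on a strong machine or as a thin single-node agent on a home PC.

```python
# Hedged sketch of a "recursive" DIMES-style agent (illustration only):
# the same class routes messages on the virtual overlay and can simulate
# further agent instances inside itself.
class Agent:
    def __init__(self, name):
        self.name = name
        self.neighbors = {}     # virtual overlay links: name -> Agent
        self.hosted = {}        # agents simulated inside this agent

    def connect(self, other):
        """Create a bidirectional virtual (overlay) link."""
        self.neighbors[other.name] = other
        other.neighbors[self.name] = self

    def host(self, child):
        """Simulate another agent instance inside this one (the recursion)."""
        self.hosted[child.name] = child

    def route(self, message, destination, hops=0):
        """Naively forward a message on the virtual layer (no loop protection
        beyond a hop limit; a real agent would need a routing table)."""
        if hops > 16:
            return
        if destination == self.name:
            print(f"{self.name}: delivered {message!r} after {hops} hops")
        elif destination in self.hosted:
            self.hosted[destination].route(message, destination, hops + 1)
        elif self.neighbors:
            next(iter(self.neighbors.values())).route(message, destination, hops + 1)

# A strong machine hosts a simulated neighborhood; a thin agent is one node.
big = Agent("neighborhood-A")
big.host(Agent("simulated-router-1"))
thin = Agent("home-pc")
big.connect(thin)
thin.route("hello", "simulated-router-1")
```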


6. The Internet as a complex emergent system


If we were in the situation of studying human intelligence, one way to organize our efforts would be along an axis starting at the most physical / "hardware" aspects and ending at the most "soft", "humanistic" ones. This study would then fall within three different disciplines:

1. biophysics: studying neurons and their interaction networks.

2. psychophysics: studying perception and information processing.

3. psychology: studying cognitive and motivational processes.


The table below attempts to display the planned study of the Internet by DIMES@Home along similar principles. The three rows of the table (labeled 1., 2., 3.) capture the three levels at which one can characterize the Internet:

1. physical,

2. information flow and

3. emergent / cognitive.



Conceptual layers (rows) vs. TIME (columns):

Columns: A. Internet Now (Observatory) | B. New Ideas; Analytical Tools | C. Future Internet Mock-up

Row 3. Cognitive / Social Layer (the Web)
A: Self-organization; content-based relationships; space-time dynamics of communities; emergent collective entities.
B: Emergence of complex institutions from local interactions; market mechanisms; collective objects, their personality, interests, strategies.
C: Predict the emergence / success / failure of social / technological Web waves.

Row 2. Information Layer (overlay networking; GRID and peer-to-peer)
A: Information transmission measurements; Web dynamics.
B: Information flow; distributed control; local protocols; self-healing.
C: Study / simulate statistical stability and guarantees; can trust be distributed?

Row 1. Hardware Infrastructure Layer (communication lines; packets and routers)
A: Find hidden net elements and connections; packet traffic spatio-temporal fluctuations.
B: Autocatalytic agents; self-regulated growth; power laws, small worlds; K-core percolation.
C: Hardware evolution: mobile nodes, ad-hoc connectivity; simulate reliability and traffic changes upon hardware / protocol modifications.

DIMES@Home Platform support for each column: A - Measurements; B - Theory; C - Simulation.

Table 1: Levels of the study of the present and future of the Internet


In other words:

1. The lowest layer / row studies the physical / hardware (Internet) infrastructure and the packet dynamics, and the consequences thereof for robustness and traffic.

2. The next layer / row represents the overlay network (the Web), its logical links and its information flow.

3. The highest layer / row concerns the content-based, self-organized communities and the institutions governing their functioning, and the emergence of collective intelligence in addition to, and on top of, the sum of the informational contents of all the nodes.

The horizontal axis represents time: present (column A), future (column C) and in-between (column B).
Each of these three columns relies on a different aspect supported by the DIMES@Home platform (see the bottom of the table):

A - the Internet's present can be observed by direct measurements;

C - the Internet's future can be simulated (in terms of modifications or evolution of the present situation);

B - the connection between the present and the future relies on theoretical models.


Let us describe in more detail the cells of the resulting table:



Row 1. Hardware Infrastructure Layer (closest to the DIMES@Home foundation) represents the most basic study of the Internet's physical infrastructure and packet dynamics: the topology of the Internet connections.



1.A Present Internet (physical) infrastructure topology and dynamics:

- discover and measure inter-country and local links not previously observable, along with information on the rate of change of these characteristics.

- measure with high precision the packet dynamics and its spatio-temporal fluctuations. Characterize the spatial as well as the temporal extent of bottlenecks and delays.



1.B Theoretical ideas for describing Internet infrastructure evolution:

- relation between topology and dynamics (Ben-Av and Solomon 1990); scaling and critical slowing down (Solomon 1995).

- biologically inspired network growth mechanisms (Hershberg et al. 2001): autocatalytic agents (Louzoun et al. 2001), self-regulation (Levy et al 1996), Lotka-Volterra systems (Solomon and Richmond 2001).

- small worlds, power laws (Blank and Solomon 2000), other global topological features (Weisbuch and Solomon 2002).

- link and K-core percolation (Shalit et al 2003); multigrid (Stenhill et al 1994) and cluster (Persky et al 1996) methods (a small illustration of the K-core notion follows).
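As a small, hedged illustration of the K-core notion mentioned in the last item (our example, assuming the networkx library; it is not the analysis of Shalit et al.), the sketch below peels a random graph into its k-cores. The k-core is the maximal subgraph in which every node retains at least k neighbors, and the depth of the deepest non-empty core is one rough indicator of how densely interconnected a network's backbone is.

```python
# Hedged illustration of K-core decomposition (assumes networkx is installed).
# The k-core is the maximal subgraph in which every node has degree >= k;
# peeling the cores exposes the densely connected "backbone" of a network.
import networkx as nx

G = nx.gnm_random_graph(1000, 3000, seed=1)   # stand-in for a measured topology
core_number = nx.core_number(G)               # node -> deepest core containing it
k_max = max(core_number.values())

for k in range(1, k_max + 1):
    core = nx.k_core(G, k)
    print(f"k={k}: {core.number_of_nodes()} nodes, {core.number_of_edges()} edges")
```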



1.C The future of the Internet infrastructure geometry / functionality:

- guess the future voluntary / emergent trends: mobile nodes, ad-hoc nets, active nets, novel routing patterns and distributed resource allocation.

- stochastic network dynamics (Adi et al 1994).

- finding solutions for systematic, recurrent failure of elements (without replacing them) (Kirkpatrick).

- simulate the effect of changes on net reliability, prompt response, etc.

- clusters (Ben-Av et al. 1992) and multigrid / multiscale dynamics (Evertz et al 1991).


Row 2. Information Layer is expected to teach us where in the Web the information is / should be generated, localized, processed and transmitted.

2.A The present Web (overlay networks) dynamics:

- emergence and evolution of peer-to-peer networks, including their cooperative mechanisms for information storage, retrieval, processing and transmission.

- a question analogous to the brain anatomy-physiology problem (Rubin et al. 1995b, Adi-Japha et al. 1998) is finding the relation between the web geometry and micro-dynamics on one side vs. informational relevance on the other: what are the most relevant nodes / links? The ones with the most connectivity, traffic?

- what are the evolutionary pressures put on the Web by this geometry-functionality connection? Frustrated systems, spin glasses and cluster algorithms (Persky and Solomon 1996).



2.B The theoretical studies of the Web:

- One should be able to produce distributed / local protocols for information transmission (Goldenberg et al 2000) with a certain level of adaptive / self-healing features and reasonable security (Weisbuch and Solomon 2000).

- Characterizing the stability of overlay network local protocols in terms of their typical case, in the statistical mechanics style (Solomon and Levy 2003), is quite different from the worst-case analysis that has dominated the mostly Game-Theory-based treatments until now (when sitting in the corner of a room, there exists the danger that all the air molecules in the room will spontaneously cluster in another corner; there are no documented cases of this worst case actually taking place).

- It is important to establish to what degree departures from the (stochastically) stable regime are restored (Persky et al 1995).


2.C The future of the Web:

- The simulation approach (Solomon 1995) will permit testing the success of distributed control algorithms, which manage typical behavior, and of distributed trust, privacy, and other socially valuable "qualities" of service, whose underlying graph theories are far more complex. The success of heuristic strategies for forming peer-to-peer networks and services which accomplish both economical communications and local data provisioning can only be tested by this sort of simulation.

- The practical application of such analyses to security issues will result in optimal "immunization" / "security" policies consistent with the "openness" / "information freedom" that peer-to-peer networks need.



Row 3. Social / Cognitive Layer studies the higher emergent web faculties: information, business and social communities.



3.A Present collective features:

- On top of the peer-to-peer content-based connections (Row 2), collective self-organizing communities appear quite spontaneously. As in the case of other spontaneous condensation phenomena, the exact place and time at which the percolation transition (Weisbuch et al 2000) takes place may be unpredictable (a generic illustration appears after this list). Yet one can try first to measure experimentally the space-time evolution of these communities and then gain increasing understanding and control of their dynamics (3.B).

- One would hope to characterize, in terms of these collective emergent phenomena (Solomon et al 2000), the present connections within and between communities (possibly using the links within the Web content clusters provided by today's search engines).

- Dynamics of networks of concepts and the emergence of meaning (Stolov et al. 2001).
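As a generic, hedged illustration of such a percolation transition (our example, assuming the networkx library; it is not the social-percolation model of the cited works), the sketch below measures the largest connected cluster of a random graph as the link density is swept through its critical value: the giant component appears abruptly when the average degree crosses 1, which is the kind of sharp, hard-to-predict condensation alluded to above.

```python
# Hedged illustration of a percolation-type transition (assumes networkx):
# the fraction of nodes in the largest connected component of G(n, p) jumps
# sharply when the average degree n*p crosses 1.
import networkx as nx

n = 2000
for avg_degree in (0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 3.0):
    G = nx.gnp_random_graph(n, avg_degree / n, seed=7)
    giant = max(nx.connected_components(G), key=len)
    print(f"<k> = {avg_degree:.1f}   giant component fraction = {len(giant) / n:.3f}")
```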


3.B Theoretical ideas on Web emergent features:

- One should explore the potential of market mechanisms (Levy et al. 1994, 1996, 2000) to design and manage the web-communities' "social" / "resources" needs. This is a very open field at present. Both proofs (3.B) of desirable characteristics and simulations (3.C) of market behaviors (Solomon 1999) would be useful.

- The application of simple agent-oriented models to complex collective phenomena has already had certain predictive successes (Louzoun et al 2000, Muchnik 2003).

- The self-organization potential of the market mechanisms may explain the emergence and many of the properties of the net institutions (Louzoun and Solomon 2001).



3.C Predicting the future social trends and cognitive features of the Web:

- One should endeavor to design network / web relationships not to be optimal but (Mar Or et al. 2003) to evolve according to protocols / by-laws that self-organize them into optimal adaptive communities (don't endow the system with a fish from the beginning: teach it how to fish).

- One should be able to predict the conditions under which certain global "phase transitions" may take place and the ways to trigger, prevent, accelerate, stop or modify them (Weisbuch and Solomon 2000).


Such a program will require the integration of several distinct research communities. In fact, such a group, including scientists from the communications, network measurement, distributed computation and physics of complex systems communities, is being established. As the program expands in later years, we expect also to involve workers from information theory and multi-agent dynamics.






References

R. Adam and S. Solomon. Do Men Pass The Turing Test? In preparation.

E. Adi, M. Hasenbusch, M. Marcu, E. Pazi, K. Pinn and S. Solomon. Monte Carlo Simulation of 2-D Quantum Gravity as Open Dynamically Triangulated Random Surfaces. Phys. Lett. B 320: 227-233, 1994.

E. Adi-Japha, I. Levin and S. Solomon. Emergence of Representation in Drawing: The Relation Between Kinematic and Referential Aspects. Cognitive Development 13: 25-51, 1998.

P. W. Anderson. More is Different. Science 177: 293, 1972.

R. Ben-Av and S. Solomon. Topology and Lattices. Int. J. Mod. Phys. A 5 (2): 427-437, 1990.

R. Ben-Av, J. Kinar and S. Solomon. Cluster Simulation of the Ising Model on Dynamically Triangulated Lattices. Int. J. Mod. Phys. C 3: 279-295, 1992.

A. Blank and S. Solomon. Power laws in cities population, financial markets and internet sites (scaling in systems with a variable number of components). Physica A 287 (1-2): 279-288, 2000.

H.-G. Evertz, M. Hasenbusch, M. Marcu, K. Pinn and S. Solomon. Stochastic Cluster Algorithms for Discrete Gaussian (SOS) Models. Phys. Lett. B 254: 185, 1991.

J. Goldenberg, D. Mazursky and S. Solomon. Creative Sparks. Science 285: 1495-1496, 1999.

J. Goldenberg, D. Mazursky and S. Solomon. Meme's the Word. Science 286: 1477, 1999.

J. Goldenberg, B. Libai, S. Solomon, N. Jan and D. Stauffer. Marketing percolation. Physica A 284 (1-4): 335-347, 2000.

U. Hershberg, Y. Louzoun, H. Atlan and S. Solomon. HIV time hierarchy: winning the war while, loosing all the battles. Physica A 289 (1-2): 178-190, 2001.

M. Levy, H. Levy and S. Solomon. A microscopic model of the stock market; Cycles, booms and crashes. Economics Letters 45: 103-111, 1994.

M. Levy, N. Persky and S. Solomon. The Complex Dynamics of a Simple Stock Market Model. Int. J. of High Speed Computing 8: 93, 1996.

M. Levy, H. Levy and S. Solomon. Microscopic Simulation of Financial Markets; From Investor Behavior To Market Phenomena. Academic Press, New York, 2000.

M. Levy and S. Solomon. Power Laws are Logarithmic Boltzmann Laws. International Journal of Modern Physics C 7 (4): 595, 1996.

Y. Louzoun, S. Solomon, H. Atlan and I. R. Cohen. Modeling complexity in biology. Physica A 297 (1-2): 242-252, 2001.

E. Mar Or, E. Shir and S. Solomon. Solving Traffic Jams: Human Intervention or Self-Organization? To appear in International Journal of Modern Physics.

L. Muchnik, F. Slanina and S. Solomon. The Interacting Gaps Model: Reconciling Theoretical and Numerical Approaches to Limit-Order Models. To appear in Physica A.

N. Persky and S. Solomon. Collective Degrees of Freedom and Multiscale Dynamics in Spin Glasses. Phys. Rev. E 54: 4399, 1996.

N. Persky, R. Ben-Av and S. Solomon. Dynamical Scaling from Multi-Scale Measurements. Phys. Rev. B 51: 6100-6103, 1995.

N. Persky, I. Kanter and S. Solomon. Cluster Dynamics for Randomly Frustrated Systems with Finite Connectivity. Phys. Rev. E 53: 1212, 1996.

N. Rubin, S. Hochstein and S. Solomon. Restricted Ability to Recover 3D Global Motion from 1D Motion Signals: Theoretical Observations. Vision Research 35: 569-578, 1995.

N. Rubin, S. Hochstein and S. Solomon. Restricted Ability to Recover 3D Global Motion from 1D Motion Signals: Psychophysical Observations. Vision Research 35: 463-476, 1995.

A. Shalit, S. Kirkpatrick and S. Solomon. Scaling, K-Cores and Networks. In preparation.

N. M. Shnerb, Y. Louzoun, E. Bettelheim and S. Solomon. The importance of being discrete: Life always wins on the surface. Proc. Natl. Acad. Sci. USA 97 (19): 10322-10324, 2000.

S. Solomon. The Microscopic Representation of Complex Macroscopic Phenomena. In Annual Reviews of Computational Physics II, pp. 243-294, ed. D. Stauffer, World Scientific, 1995.

S. Solomon. Stochastic Lotka-Volterra Systems of Competing Auto-catalytic Agents. In Decision Technologies for Computational Finance, eds. A.-P. Refenes, A. N. Burgess and J. E. Moody, Kluwer Academic Publishers, 1998.

S. Solomon. Behaviorally realistic simulations of stock markets; Traders with a soul. Computer Physics Communications 121-122: 161, 1999.

S. Solomon. Generalized Lotka Volterra (GLV) Models of Stock Markets. In Applications of Simulation to Social Sciences, pp. 301-322, eds. G. Ballot and G. Weisbuch, Hermes Science Publications, 2000.

S. Solomon. Self-Organization and Non-Self-Averaging Effects in Systems of Discrete Auto-Catalytic Elements. In Multiscale Computational Methods in Chemistry and Physics, pp. 351-360, eds. A. Brandt, J. Bernholc and K. Binder, IOS Press, 2001.

S. Solomon and P. Richmond. Stability of Pareto-Zipf law in non-stationary economies. In Economics with heterogeneous interacting agents, p. 141, eds. A. Kirman and J. B. Zimmermann, Springer, Berlin-Heidelberg, 2001.

S. Solomon, G. Weisbuch, L. de Arcangelis, N. Jan and D. Stauffer. Social percolation models. Physica A 277 (1-2): 239-247, 2000.

I. Stenhill, S. Solomon and K. Wolowelsky. Dynamical Algebraic multi-grid simulations of free fields on Random Triangulated Surfaces. Computer Physics Communications 83: 23-29, 1994.

Y. Stolov, M. Idel and S. Solomon. What Are Stories Made Of? Quantitative Categorical Deconstruction of Creation. International Journal of Modern Physics C 11 (4): 827-835, 2000.

A. M. Turing. Computing Machinery and Intelligence. Mind 49: 433, 1950.

G. Weisbuch and S. Solomon. Self-organized percolation and critical sales fluctuations. Int. J. Mod. Phys. C 11 (6): 1263-1272, 2000.

G. Weisbuch, S. Solomon and D. Stauffer. Social Percolators and Self-Organized Criticality. In Economics with heterogeneous interacting agents, p. 43, eds. A. Kirman and J. B. Zimmermann, Lecture Notes in Economics and Mathematical Systems, Springer, Berlin-Heidelberg, 2001.

G. Weisbuch and S. Solomon. Social Percolators and Self Organized Criticality. In Handbook of Graphs and Networks: From the Genome to the Internet, pp. 113-132, eds. S. Bornholdt and H. G. Schuster, Wiley-VCH, Berlin, 2002.