Brain-Like Artificial Intelligence:
Analysis of a Promising Field
INRIA Rhône-Alpes, Montbonnot Saint-Martin, France
Keywords: Brain-Like Artificial Intelligence, Hybrid Cognitive Architectures, Human-Level Machine Intelligence
Abstract. This paper gives a quick overview of the field of Brain-Like Artificial
Intelligence and analyses its potential, including the possibility of using it as a tool for
resolving well-known problems such as creating Ambient Intelligence. The nascent
taxonomy of cognitive architectures related to Brain-Like Artificial Intelligence is also
reviewed, and a more detailed way of classifying such cognitive architectures is
proposed, with special attention paid to their internal structure and dynamics.
1. INTRODUCTION
The general objective of Artificial Intelligence
(AI) as a branch of computing science is to make
computers behave like humans, or at least allow them
to do things which would require intelligence if done by
humans. While this was the original idea leading to
the creation of the field more than 50 years ago,
reproducing human-level intelligence is today
considered as possibly the most complex problem
science has ever faced.
Nowadays, artificial intelligence has undoubtedly
become an essential part of technology and industry,
automating a large number of tasks formerly done by
human beings and providing solutions to incredibly
complex problems in computer science. However,
regarding its original purpose, that is to say creating
intelligence at human level or above, Strong AI - the
field of artificial intelligence concerned with creating
intelligence matching or exceeding human capabilities,
also referred to as Artificial General Intelligence (AGI) -
has not succeeded yet and perhaps never will. These days, AI
researchers are capable of creating computer
programmes able to carry out tasks difficult even for
humans such as logic, problem solving, path planning,
playing chess, etc., yet remain almost incapable of
developing a programme which can compete with some
of the simplest of a four-year-old's achievements, for
instance, perceiving its environment, expressing itself,
reacting to somebody's behaviour or making everyday-life
decisions. Former approaches, principally following a
mathematically and algorithmically guided path, have
been focusing on duplicating and enhancing some very
particular human skills or behaviours, but identifying
and reproducing the inner structures and fundamental
principles allowing the brain to process information in
such a way as to emerge with those skills or behaviours
has for the most part been ignored and left to the
discretion of cognitive science, mostly cognitive
psychology and neurobiology. In other words, while
such programmes might end up doing better than
humans in well-defined and very specific situations, it is
clear that they are not capable of efficiently processing
widely diversified or highly context-related information
the way the human brain does. The major factors of
failure of the usual rule-based approach are the richness
and the unpredictability of the considered environment,
especially when it is a natural one.
Brains and computers still have very little in
common, and in order to achieve a major breakthrough
in the field of artificial intelligence, which could draw
AI-based programmes closer to actual brains, a
paradigm shift might be necessary. Thanks to very
recent discoveries in both neurobiology and cognitive
psychology, the novel research field of Brain-Like
Artificial Intelligence has appeared, and seems to
suggest one way out of this dilemma [3, 4].
2. A NOVEL FIELD, BRAIN-LIKE INTELLIGENCE
This part aims to give the reader a basic
understanding of what the field of Brain-Like Artificial
Intelligence consists of, by explaining its basic concepts
and its interactions with existing subfields, structures
and ideologies.
2.1 Basic Concept
As outlined in Part 1, artificial intelligence today
mainly provides methods and tools focused on very
specific points in the realm of capabilities of a human
brain, doing well for problems which are sufficiently
well structured and of controlled complexity.
Assimilating brain functions to a black box, without
precisely knowing what is inside, has proved to be a
limiting factor in creating general-purpose intelligence,
if not a complete obstacle.
The research field of Brain-Like Artificial
Intelligence is concerned with the development and
implementation of the inner structures, concepts and
models allowing the human or animal brain to
effectively process diversified, contextual, complex and
overwhelming quantities of information.
2.2 Basic Dogma
The basic idea of Brain-Like Intelligence is highly
intuitive: it is a question of not only replicating the
results provided by a human dealing with a particular
problem given some input data, but also reproducing the
structure and dynamics inside this individual's brain
leading to the emergence of such results - in short, using
the brain as an archetype for AI models.
While easy to comprehend, this task is anything but
simple to implement, and we might not yet have
sufficient knowledge of how the brain works to achieve
that goal properly.
It is however a great step forward to realise that
the old-fashioned approaches might restrain the research
efforts towards general-purpose artificial intelligence,
and therefore that brain scientists and computer
engineers ought to work together in order to accelerate
progress in both disciplines.
Also, even though we are still very far from
understanding thoroughly how the brain operates, both
cognitive science and neurobiology have recently come
up with interesting and utilisable results forming a fertile
basis for Brain-Like Artificial Intelligence. It is through
these recent discoveries that the field of Brain-Like
Artificial Intelligence has been in a position to emerge.
A more complete version of the paradigm used
throughout this paper could be phrased as follows:
It is well appreciated that the human brain is the most
sophisticated, powerful, efficient, effective, flexible and
intelligent information processing system known.
Therefore, the functioning of the human brain, its
structural organisation and its information processing
principles should be used as an archetype for designing
artificially intelligent systems, instead of just emulating
its behaviour in a black box manner. To achieve this,
approaches should not build on the work of engineers
only, but on a close cooperation between engineers and
brain scientists.
2.3 Relevant AI Subfields
While the basic ideology of Brain-Like
Intelligence clearly differs from other Artificial
Intelligence currents, along with its methodology and
goals, this subfield and some of the other ones yet share
a lot of common ground. In fact, there are no clear
demarcations between most of the sub-disciplines, and it
would be wrong to think of Artificial Intelligence as a
branch of computer science which can be divided into a
very specific number of sharply defined and
complementary components. One could even argue that
such a unique and unanimously accepted taxonomy
doesn't exist yet; there is indeed no clear consensus on
that point within the concerned scientific community. It is
therefore useful for scientists dedicated to the study
of Brain-Like AI to be aware of the commonalities
between this subfield and the others, and to keep a
close watch on the possible contributing results that
might emerge from such subfields.
The classification considered in this chapter is
the one proposed by R. Velik, conveniently
dividing Artificial Intelligence into sub-domains
according to their ideology and goals.
· Applied Artificial Intelligence
Applied AI is concerned with creating
programmes capable of "intelligently" handling
problems in very specialised areas. One of the most
successful and representative forms of such
programmes is the Expert System.
Although it is one of the most potent subfields
of Artificial Intelligence, Applied AI shares almost
nothing with Brain-Like AI, which aims at
overall intelligence, and could be defined as partially
opposed to it.
· Artificial General Intelligence
As mentioned in Part 1, Artificial General
Intelligence is concerned with creating programmes
which show general-purpose intelligence, such as -
but not limited to - what the human brain does, as
opposed to Applied Artificial Intelligence.
This subfield completely embraces Brain-Like
Intelligence, which could be defined as a specific
instantiation of AGI, restraining its methodology to
creating global intelligence by modelling the
functioning of the brain exclusively.
A victim of its vastness, AGI has drawn only little
attention over the last decades. While a small
number of scientists are today active in AGI
research, Brain-Like AI might just be a version of
AGI sufficiently narrowed to render it all the more
tractable.
· Embodied Artificial Intelligence
Embodied Artificial Intelligence is concerned
with studying how intelligence emerges as a result of
sensorimotor activity, constrained by the physical
body and a mental developmental programme.
This subfield is connected to Brain-Like
Intelligence in that perceiving the environment, as
well as the capability of interacting with or reacting
to this environment, is a necessary part of the design
of an artificially intelligent agent.
In his article, S. Potter also strongly suggests that
the shape and the physical composition of the brain
define and alter its capacities, and that recreating
Brain-Level Intelligence might first of all require
creating a similar substrate of equal complexity.
· Bio-Inspired Artificial Intelligence
Bio-Inspired Artificial Intelligence is concerned
with using biology in algorithm construction,
studying how biological systems communicate and
process information, and developing information
processing systems that use biological materials or
are based on biological models.
Once again, this subfield of AI completely
embraces Brain-Like Intelligence, the brain being a
biological unit processing information. This field
also suffers from the tremendous vastness of its
focus, since the level of abstraction at which
nature serves as a model varies greatly. The
contribution of Bio-Inspired AI to the task of
modelling the functioning of the brain has been very
limited so far.
2.4 Cognitive Architectures
Cognitive architectures are architectures
defining the dynamics of a given model, a model
according to which cognition or intelligent behaviour
can be recreated, or at least approached. Such
architectures are meant to formalise, and lead to the
implementation of, the considered system. Cognitive
architectures are usually either biologically inspired
(commonly referred to as BICAs) or psychologically
inspired, most of the time related respectively to the
emergentist approach or the symbolic approach,
discussed below.
It is important to keep in mind that not all cognitive
architectures are attached to Brain-Like Artificial
Intelligence, which is slightly narrower. When a
cognitive architecture is designed to endow an agent
with cognition or intelligence, the brain is just one
instantiation of that achievement, which does not imply
that replicating a brain is the only way to create
cognition. In other words, some models do not focus on
the brain's functioning and do not intend to, but are
still cognitive architectures. However,
in this paper, little difference is made between cognitive
architectures specifically related to Brain-Like AI and
the other ones, since the statements made can be
generalised to both.
3. DETAILED STUDY OF BRAIN-LIKE TAXONOMY
Two complementary and highly competing
approaches have tended to bisect the scientific
community of Artificial Intelligence for a number of
years: the Symbolic and the Connectionist approaches
(the latter commonly referred to as the subsymbolic
approach or, especially in Brain-Like AI, the
emergentist approach). While it is extremely likely that
neither of these models can fully address the entirety of
Artificial Intelligence problems, both still have very
devoted groups of researchers.
Each principle has its strengths and weaknesses,
and it has now become common knowledge that
combining both approaches presumably yields
better results than sticking to one in particular. In that
regard, a plethora of hybrid architectures have already
emerged, making the most of both. Yet the influence
of those approaches is still very present throughout
Artificial Intelligence, Brain-Like AI being no exception.
This section aims at explaining, illustrating and
digging further into the existing classification of Brain-
Like AI, first by briefly discussing the two prime
currents, Connectionism and Symbolism, and
then by considering a somewhat more precise
classification shading the so-called "hybrid"
architectures. Duch proposes a basic graphical
representation of this simplified taxonomy (see Fig. 1).
3.1 Emergentist and Symbolic Approaches
A venerable tradition in AI focuses on the
physical symbol system hypothesis, stating that minds
exist mainly through the manipulation of symbols that
represent aspects of the world or themselves. A
more recently established paradigm, Connectionism,
resulted from various dissatisfactions with symbol
manipulation models, which proved unable to efficiently
handle flexible and robust processing.
· Symbolic Paradigm
The field of AI, since its inception, has been
conceived mainly as the development of models
using symbol manipulation. The computation in
such models is based on explicit representations that
contain symbols organised in specific ways;
aggregate information is explicitly represented
with aggregate structures constructed from
constituent symbols and syntactic combinations of
these symbols. A physical symbol system has the
ability to input, output, store and alter symbolic
entities, and to execute appropriate actions in order
to achieve its goals.
The idea of symbolic systems is in fact highly
intuitive and often used without noticing. To take a
simplified example, consider the letters in the
alphabet as symbols. Writing one of these letters, in
a word, for instance, would make the written letter a
token, i.e. an instantiated symbol. Putting symbols
(or expressions) together forms expressions,
which have a proper meaning and can be altered
using a set of rules.
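The letter example above can be sketched as a toy symbol system. The expression format and the single rewrite rule below are illustrative assumptions, not taken from any architecture discussed in this paper.

```python
# A minimal sketch of a physical symbol system: expressions are nested
# tuples of symbols, and a set of rewrite rules alters them.

def rewrite(expression, rules):
    """Apply the first matching rewrite rule to the expression."""
    for pattern, replacement in rules:
        if expression == pattern:
            return replacement
    return expression  # no rule applies; the expression is left unchanged

# One illustrative rule: double negation elimination.
rules = [
    (("NOT", ("NOT", "P")), "P"),
]

print(rewrite(("NOT", ("NOT", "P")), rules))  # -> P
print(rewrite("Q", rules))                    # -> Q (unchanged)
```

The point of the sketch is only that symbolic computation operates on explicit, syntactically structured representations, altered by explicit rules.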
· Connectionist Paradigm
The connectionist paradigm (also called
subsymbolic paradigm, or emergentist paradigm
especially when applied to Brain-Like AI) aims at
massively parallel models that consist of a large
number of simple and uniform processing elements
interconnected with extensive links. In many
connectionist models, representations are distributed
throughout a large number of processing elements.
The idea of connectionist systems is somewhat
less intuitive. To take a simplified example, imagine
a directed graph composed of three sorts of units:
input, hidden and output units (see Fig. 2 ).
Every input unit, as the source of the edge, is
connected to a certain number of hidden units, as the
destination of the edge. Those hidden units are
connected the same way to others hidden units or
output units. A unit basically computes its own inner
value using an internal function taking into account
the value of the connected units for which this one is
a destination. Input units have a starting value. The
set of values of output units encodes a piece of
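The graph just described can be sketched in a few lines. The weights and the tanh internal function are illustrative assumptions; any similar squashing function would do.

```python
# A minimal sketch of the directed graph described above: input units feed
# hidden units, which feed output units; each unit computes its value from
# the weighted values of its source units.
import math

def unit_value(sources, weights):
    # internal function: squash the weighted sum of the connected units
    return math.tanh(sum(v * w for v, w in zip(sources, weights)))

inputs = [1.0, 0.5]                          # input units have a starting value
hidden = [unit_value(inputs, [0.8, -0.4]),   # each hidden unit reads all inputs
          unit_value(inputs, [0.3, 0.9])]
outputs = [unit_value(hidden, [1.0, -1.0])]  # output values encode information
print(outputs)
```

No single unit carries a meaning on its own; the information lies in the joint pattern of output values.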
Fig. 1. Duch's simplified taxonomy of cognitive architectures.
Fig. 2. An illustration of a simplified neural net.

Architectures following the symbolic approach are
usually psychology-based (or inspired), stating that
cognition is a high-level phenomenon, supposed to be
fundamentally independent from low-level mechanisms;
i.e. there are theoretically plenty of different substrates
which could lead to the same level of cognition, the
same way plenty of different hardware platforms can
support the exact same operating system in computer
science.
This approach, applied to Brain-Like Intelligence, is
usually categorised as the Top-Down approach, being
about the recreation of the brain's functioning from what
cognitive psychology has witnessed and discovered, and
then trying to dig further into details.
Architectures following the connectionist
approach are usually biologically-based (or inspired),
stating that intelligent behaviour emerges from the
sophistication and the complexity of the substrate the
brain - or other natural models - represents.
This approach, applied to Brain-Like Intelligence, is
usually referred to as the Bottom-Up, or emergentist
approach, being about the recreation of the functioning
of the brain, starting from the most detailed view,
generally using neural networks as a basis.
Abstract symbolic processing is expected to emerge
from the complexity of such networks, along with other
high-level capabilities.
Symbolism, being initially designed to allow an
accurate and efficient representation of information, has
not proven to be very effective at learning, especially
incremental learning, creativity, procedure learning, and
episodic and associative memory [9, 10]. This is notably
related to the fact that symbolic models handle noise
and inconsistencies in data very poorly, a rule-based
approach mostly needing a well-defined and stable
environment with which to interact.
Connectionism, on the other hand, despite its
difficulties in achieving efficient data representation, is
particularly good at learning, especially incremental
learning, and handles noise and unexpectedness in data
very well. It is also strong at recognising patterns in
high-dimensional data, reinforcement learning and
associative memory [9, 10].
Although we are not yet completely sure from which
level of abstraction intelligent behaviour stems,
many researchers believe that a Bottom-Up approach is
very likely to come up with prominent results in the
coming years. However, if the emergentist architectures
seem to offer great potential, no one has yet shown
how to achieve high-level functions, such as abstract
reasoning or language processing, by purely using this
approach.
3.2 Modular Architectures
This part intends to propose a more detailed
taxonomy for the field of Brain-Like Artificial
Intelligence, whose research community has so far
mostly agreed on the previous classification exclusively,
that is to say a Symbolic - Emergentist - Hybrid
distinction.
Another way of classifying Cognitive Architectures
is by analysing their modularity, as defined by R. Sun
(see Fig. 3). Systems, and more specifically
Cognitive Architectures, can be divided into two broad
categories: Single-Module and Multi-Module
architectures.
A module herein represents some part of a cognitive
architecture dedicated to representation, learning or
processing, or any combination of such parts, up to
an entire cognitive architecture in itself.
Fig. 3. A proposed taxonomy of Brain-Like Artificial Intelligence Cognitive Architectures.
Modularity will be further defined as allowing
modules from possibly different cognitive
architectures to be associated with one another and
form a whole capable of handling tasks which
could not be handled by any single one of the original
cognitive architectures, or would have been handled in
a less efficient manner.
· Single Module Architectures
Single Module architectures are developed
around one module, which thus represents the
architecture itself in its entirety. For Cognitive
Architectures, this module can be either Emergentist or
Symbolic, both explained in Part 3.1. Following this
classification, purely Emergentist architectures can
further be divided into two distinct sets, according
to the way they internally represent data:
the localist and the distributed representations.
In the localist representation, familiar entities
such as letters, words, concepts and propositions are
encoded by individual units, meaning that one distinct
node represents each entity.
In the distributed representation, such entities
are encoded by alternative patterns of activity over
the same units, such that each entity is represented
by the activity of many units and each unit
participates in representing many entities.
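The two representations can be contrasted with a toy example. The entities and activity vectors below are illustrative assumptions; any distributed code with overlapping activity patterns would make the same point.

```python
# Localist representation: one distinct unit per entity (one-hot activity).
localist = {
    "cat":  [1, 0, 0],
    "dog":  [0, 1, 0],
    "bird": [0, 0, 1],
}

# Distributed representation: each entity activates many units, and each
# unit participates in representing many entities.
distributed = {
    "cat":  [1, 1, 0, 1],
    "dog":  [1, 0, 1, 1],
    "bird": [0, 1, 1, 0],
}

# In the localist code no unit is shared between entities; in the
# distributed one, some units are active for several entities at once.
shared = [i for i in range(4) if distributed["cat"][i] and distributed["dog"][i]]
print(shared)  # -> [0, 3]: units 0 and 3 serve both "cat" and "dog"
```

The overlap of activity patterns is what gives distributed codes their similarity-based behaviour: entities with related meanings can share units.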
· Multi Module Architectures
Multi Module architectures are developed
around multiple modules, such modules representing
either entire single module architectures or parts of
them. A further distinction can then be made, the
system being either homogeneous or heterogeneous.
Homogeneous multi module systems are similar
to Single Module systems, apart from the fact that
the considered architecture, or parts of it, is
replicated multiple times in order to create
redundancy for various reasons.
Heterogeneous multi module systems represent
the truly hybrid systems, incorporating modules
from several distinct architectures. Those systems
will be the object of the next section (see 3.3).
3.3 Hybrid Architectures
Finally, in order to benefit from the strong points
and overcome most of the weaknesses of the two
complementary paradigms, hybrid architectures have
been developed, integrating features of systems attached
to both approaches and drawing the attention of an
increasing number of researchers. But while all agree
on the existence of a hybrid sub-current, little
effort has yet been made towards a further
categorisation, which is the point of this section.
A variety of distinctions in hybridisations can be made,
and different combinations of such distinctions can
emerge.
· Differences in representation
A first distinction can be made in terms of
representation of constituent modules. Heterogeneous
multi module architectures can integrate modules from
both emergentist and symbolic systems, the former
possibly having a localist or a distributed
representation. It is of course possible to bring together
any number of modules, not being limited to two.
CONSYDERR is a cognitive architecture taken
here as an example. It consists of two levels, top and
bottom, the former being a network with localist
representation and the latter a network with
distributed representation. The localist network is
linked with the distributed network by connecting each
node at top level - representing a concept - to all the
nodes at bottom level representing the same concept.
The model is then capable of both rule-based and
similarity-based reasoning with incomplete,
inconsistent and approximate information.
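The cross-level linking just described can be sketched as follows. The concept names and feature indices are illustrative assumptions; CONSYDERR's actual implementation is of course richer than this.

```python
# Sketch of CONSYDERR-style linking: each top-level (localist) concept
# node is connected to all bottom-level (distributed) units representing
# the same concept.

top_level = ["bird", "penguin"]          # one localist node per concept
bottom_level = {                         # feature units per concept (assumed)
    "bird":    {0, 1, 2},
    "penguin": {1, 2, 3},
}

# Cross-level links: concept node -> all same-concept feature units.
links = {concept: bottom_level[concept] for concept in top_level}

# Similarity-based reasoning falls out of shared feature units: activating
# "bird" partially activates "penguin" through their common units.
overlap = bottom_level["bird"] & bottom_level["penguin"]
print(sorted(overlap))  # -> [1, 2]
```

Rule-based reasoning runs over the top-level nodes, while the overlapping bottom-level patterns supply the similarity-based, approximate side of the model.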
· Differences in coupling
Another distinction can be made in terms of
coupling of modules. Modules can be either loosely
coupled or tightly coupled, which affects the way they
communicate with each other.
Loosely coupled modules mainly communicate
through shared files, shared memory locations and
message passing. Components of loosely coupled
modules do not tend to communicate directly with
each other, and each module is usually seen as a
black box, leaving traces of its processing or passing
results in order to communicate rather than using
function calls, if communicating at all.
PolyScheme, ACT-R and CLARION seem to be
cognitive architectures that are loosely coupled.
Tightly coupled modules communicate through
multiple channels such as various function calls
provided by each module. It is even possible to
partially merge modules with other ones, allowing a
vast range of inner interactions, such as in
CONSYDERR where node-to-node connections can
be found between each node of the two distinct
modules. An even further going tendency could lead
to the creation of atomic elements which are both
symbolic and subsymbolic in nature. DUAL, Shruti,
LIDA, OpenCog and MicroPsi seem to be cognitive
architectures that tend to be tightly coupled.
In his article, B. Goertzel argues that
loosely coupled systems might lack rich, real-time
interaction between the internal dynamics of the various
memory and learning processes, which could be an
obstacle to achieving human general intelligence.
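The two coupling styles can be contrasted with a minimal sketch. The module names and interfaces are illustrative assumptions, not taken from any of the architectures cited above.

```python
# Loose coupling: each module is a black box that leaves its result on a
# shared channel (here a message queue) rather than calling the other
# module directly.
from queue import Queue

channel = Queue()

def perception_module(raw):
    channel.put({"percept": raw.upper()})  # leaves a trace of its processing

def reasoning_module():
    message = channel.get()                # picks the result up later
    return f"decided on {message['percept']}"

perception_module("obstacle")
print(reasoning_module())                  # -> decided on OBSTACLE

# Tight coupling: modules expose function calls to one another and may
# share internal state directly.
class TightlyCoupled:
    def perceive(self, raw):
        return self.reason(raw.upper())    # direct function call
    def reason(self, percept):
        return f"decided on {percept}"

print(TightlyCoupled().perceive("obstacle"))  # -> decided on OBSTACLE
```

Both variants compute the same result; the difference is in how rich and immediate the interaction between the modules' internal dynamics can be.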
· Differences in cooperation
Finally, another distinction can be made in terms
of cooperation between the different modules.
Modules can follow different cooperation patterns,
such as pre or post-processing vs. main processing,
master/slave relationship or equal partnership.
In a pre/post-processing vs. main processing
form of cooperation, one module performs the
main process, whereas another module performs a
pre-processing or post-processing step (or both),
such as transforming input data or rectifying
output data.
In a master/slave form of cooperation, a module
has global control of the task being handled,
and calls other modules when needed for some
specific sub-task. As an example, a
(symbolic) expert system could have a rule invoking
neural network processing for some decision-making
steps.
In an equal partnership form of cooperation, all
the modules have roughly the same importance.
They are either called to handle a specific task only
one of them is capable of handling, demonstrating
complementary processes, or contextually called to
handle a task other modules could also handle but in
a different manner, demonstrating processes which
are structurally different but functionally equivalent.
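The master/slave pattern above can be sketched in a few lines. The rule, the threshold and the scoring function are illustrative assumptions; the scorer merely stands in for a trained neural module.

```python
# Master/slave cooperation: a symbolic rule system keeps global control
# and delegates one decision step to a subsymbolic scorer.

def neural_score(features):
    # stand-in for a neural network's decision-making output (assumed weights)
    weights = [0.6, -0.2, 0.4]
    return sum(f * w for f, w in zip(features, weights))

def expert_system(features):
    # master: rule-based control flow...
    if len(features) != 3:
        return "reject"                    # handled purely symbolically
    # ...with one rule invoking the slave module for the actual decision
    return "accept" if neural_score(features) > 0.5 else "reject"

print(expert_system([1.0, 0.5, 0.8]))  # -> accept
print(expert_system([1.0]))            # -> reject
```

Control never leaves the symbolic module; the subsymbolic one is consulted only for the sub-task it is better suited to.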
4. CONCLUSION
This paper has been written with the aim of
providing further details on the novel field of Brain-Like
Artificial Intelligence for the research team PRIMA,
located at the INRIA Rhône-Alpes research centre.
Amongst the many goals of this team, one particular
objective is to create Ambient Intelligence for Smart
Spaces. The point of this article was to give further
information about whether the use of Brain-Like
Artificial Intelligence is a possible way to achieve such
a goal.
To briefly summarise the former analysis,
Brain-Like Artificial Intelligence is definitely a field
working with some of the most state-of-the-art cognition
models and cognitive architectures, and should it not yet
be capable of providing one perfectly fitted to the setting
up of ambient intelligence for smart spaces, it surely
will at a subsequent time. But while it would be wise to
keep a close watch on the advancements of this
promising field, very few tools for analysing and
comparing the performance of particular cognitive
architectures exist yet, making it hard to truly know
what to expect from their use.
ACKNOWLEDGEMENTS
I thank my tutor Patrick Reignier for allowing me
to work on this project, for his support, and for his
insightful comments on the present paper. I also thank
the entirety of team PRIMA for their warm welcome.
REFERENCES
[1] Wall, B. (2009). Artificial Intelligence and Chess.
[2] Velik, R. (2012). AI Reloaded: Objectives, Potentials, and Challenges of the Novel Field of Brain-Like Artificial Intelligence. BRAIN. Broad Research in Artificial Intelligence and Neuroscience, Vol. 3, No. 3.
[3] Sendhoff, B.; Körner, E.; Sporns, O. (2009). Creating Brain-Like Intelligence: From Basic Principles to Complex Intelligent Systems.
[4] Velik, R. (2010). Quo Vadis, Intelligent Machine? BRAIN. Broad Research in Artificial Intelligence and Neuroscience, Vol. 1, No. 4.
[5] Minsky, M. (1968). Semantic Information Processing. Cambridge, MA: MIT Press.
[6] Smith, L. (2005). Cognition as a Dynamic System: Principles from Embodiment.
[7] Potter, S. (2007). What can Artificial Intelligence get from Neuroscience?
[8] Forbes, N. (2004). Imitation of Life: How Biology is Inspiring Computing. Cambridge, MA: MIT Press.
[9] Sun, R. (2000). Artificial Intelligence: Connectionist and Symbolic Approaches.
[10] Goertzel, B.; Lian, R.; Arel, I.; de Garis, H.; Chen, S. A World Survey of Artificial Brain Projects, Part II: Biologically Inspired Cognitive Architectures.
[11] Garson, J.; Zalta, E. (2010). The Stanford Encyclopedia of Philosophy (Winter 2012 Edition).
[12] Duch, W.; Oentaryo, R.; Pasquier, M. (2008). Cognitive Architectures: Where do we go from here? Proceedings of the Second Conference on AGI.
[13] Plaut, D. (1999). Encyclopedia of Psychology.