Using the Notion of Meaning Potentials for the Analysis of the Semantics of Interaction of Speech and Gestures


CHRIST’S COLLEGE CAMBRIDGE
Embodied Language II
2-4 September 2013


ABSTRACTS


------------------


Using the Notion of Meaning Potentials for the Analysis of the Semantics of Interaction of
Speech and Gestures


Jens Allwood & Elisabeth Ahlsén

University of Gothenburg, Sweden


The paper addresses the question of what and how gesture and speech, respectively, contribute to the interactive co-construction of meaning. A point of departure is the notion of “meaning potential”, which we apply to both unimodal vocal verbal units and gestures, as well as to multimodal vocal-gestural units (Allwood, 2003). The aim of the paper is to explore the different types of meaning potential and how they interact with each other in creating actual, contextually relevant meaning.


A series of studies were made of gesture and speech in video-recorded spoken interaction in different social activities, such as first acquaintance interactions, political debates and informal discussions and narrations. The studies focus on the semantic contribution of speech and gesture and on how they are interpreted both in isolation and in combination. Subjects were shown unimodal auditory or visual stimuli as well as multimodal audiovisual stimuli, in order to isolate the features contributed by the different modalities and to study how they can interact through a multimodal combination of meaning potentials. Both naturalistic and experimental stimuli were used. Findings from the studies include how features such as degree of abstractness-concreteness, various aspects of action versus object/entity orientation, and affective-epistemic states are coded and interpreted in relation to the meaning potential of a gesture. Examples of results from the studies include how level of abstraction is attributed to isolated iconic gesture stimuli. Another result is that in action-related words or phrases, like “a step” versus “to step” or “ladle”, the action orientation is somewhat more likely to be rendered in an accompanying iconic gesture than the object or entity orientation, and that this tendency is not as strong in iconic gestures produced by persons with aphasia, whose iconic gestures are more often object- or entity-related and are also, possibly as a consequence of this, somewhat easier to interpret (Ahlsén and Allwood, 2013).

The results are discussed in relation to the question of how speech and gesture are related in interaction. Theoretically, this question relates, for example, to evolutionary and cognitive perspectives on embodiment and multimodality in communication (cf. Allwood, 2008). Practically, it relates to application areas such as rhetoric, the design of embodied communicative agents (Allwood and Ahlsén, 2009) and compensatory strategies for communication disorders (Ahlsén, 2011).



References:

Ahlsén, E. (2011). Towards an integrated view of gestures related to speech. In Paggio, P., Ahlsén, E., Allwood, J., Jokinen, K. & Navarretta, C. (eds.), Proceedings of the 3rd Nordic Symposium on Multimodal Communication. NEALT Proceedings Series 15, pp. 72-77.

Ahlsén, E. & Allwood, J. (2013). What’s in a gesture? In Proceedings of the Fourth Nordic Symposium on Multimodal Communication, Göteborg, 15-16 Nov 2012. (Forthcoming in NEALT Proceedings Series.)

Allwood, J. (2003). Meaning potential and context: Some consequences for the analysis of variation in meaning. In Cuyckens, H., Dirven, R. & Taylor, J. R. (eds.), Cognitive Approaches to Lexical Semantics. Mouton de Gruyter, pp. 29-65.

Allwood, J. (2008). Dimensions of embodied communication - towards a typology of embodied communication. In Wachsmuth, I., Lenzen, M. & Knoblich, G. (eds.), Embodied Communication in Humans and Machines. Oxford University Press.

Allwood, J. & Ahlsén, E. (2009). Multimodal intercultural information and communication technology - A framework for designing and evaluating multimodal intercultural communicators. In Kipp, M., Martin, J. C., Paggio, P. & Heylen, D. (eds.), Multimodal Corpora: From Models of Natural Interaction to Systems and Application. Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence. Berlin: Springer, pp. 160-175.


Email: <eliza@ling.gu.se>, Jens Allwood <jens@ling.gu.se>


---------------


The Natural Origin of Language: The Neuroscientific Base


Robin Allott

Seaford UK


Nullius in Verba. This was and is the foundation motto of the Royal Society in England. The literal translation is "Of No-one in Words": "Do not rely on the words of any single individual". The motto has been given different interpretations and applications. In science no individual can or should be treated as the unquestionable authority. Science cannot advance by speculative verbal theorising but must be based on observation and experiment, validated by independent repetition of experiments. Science cannot progress by diktats. In the study of speech and language there have been diktats - in the 19th century a French philological society banned discussion of the origin of language. A hundred years ago Ferdinand de Saussure declared that language in its origin is arbitrary, words are symbols, conventional cultural constructs. Today this remains the orthodoxy for much of mainstream linguistics. It was the implicit basis for Noam Chomsky's assertion, decades ago, that in the study of language the emphasis had to be on the structures of syntax.

A third application of Nullius in Verba has become relevant with remarkable advances in neuroscientific experimental techniques. These go with a shift of emphasis from language as a mental construct to speech and the motor aspects of language. There has also been the much-delayed recognition of gesture and speech as parts of a single brain system. With this, and with articulatory gesture matching patterns of bodily gesture (Haskins Speech Laboratories work), there is a special difficulty of presentation. Presentation can hardly be simply verbal, simply written. The relationships have to be shown. Hence the videos in the presentation, leading up to the graphics from Graziano's research at Princeton into the categorical structuring of stimulated hand and arm movements in the monkey cortex. The neuroscientific and evolutionary base of language - before syntax, before lexicon - has to be speech sounds, the most certain examples of language universals. Yet, as Darwin in The Descent of Man could not have known, our speech sounds are part of a classic evolutionary development, in full continuity with the bodily structure and motor organisation of our animal ancestors.


Email: rmallott@eclipse.co.uk

------------------


Language and Movement


Alain Berthoz

Collège de France


Email: <alain.berthoz@college-de-france.fr>


------------------


Embodied Language and the Evolutions of Number and Gender


Bernard H. Bichakjian

Radboud University, Netherlands


Whereas the authors of studies on embodied language endeavor to show that given linguistic features correlate with associated physical activities, linguists focused on the evolution of languages observe a steady disengagement of linguistic features from their original physical sources. The features that will be discussed here are grammatical number and gender - an apparently necessary distinction for one, an oddity, when it exists, for the other; yet there is more than meets the eye. The reduction of gender from three to two, in cases such as the Romance languages, and its total disappearance, as in English, are well known and taken for granted. The reduction of grammatical numbers from three to two is also known and acknowledged, be it by a smaller group. These two sets of empirical data prompt two pressing questions: Why were linguistic systems endowed with grammatical genders, especially for inanimate items, when languages can do very well without them? And, likewise, why three numbers when two apparently suffice? These are indeed legitimate questions, and the answers must be sought in the sources of these features and in the vision and modus operandi of the speakers who invented them. The uncovered data will suggest that the evolutions of gender and number are a shift away from embodied language and, in these cases at least, an advantageous move.


Email: <BHB@Post.Harvard.edu>


------------------


Language, Affordances, and the Bodily Space


Anna M. Borghi

University of Bologna


Recent theories propose that words index objects, their referents, and evoke their affordances, and that meaning is constrained by relationships between object affordances (Glenberg & Robertson, 2000). However, it is currently debated whether affordances are automatically activated, or whether the physical and social context influences their activation (e.g., Borghi, Flumini, Natraj & Wheaton, 2012; Ellis, Swabey, Bridgeman, May, Tucker & Hyne, 2013). I will overview some recent experiments aimed at investigating how language modulates the relationships between object affordances and the bodily space.

Several lines of evidence suggest that near space holds a separate functional value compared to 'far' space (i.e., space beyond arm’s reach). It is now established that observation of objects located within reaching space activates a representation of related actions. In the first two studies (Costantini et al., 2011; Ambrosini et al., 2012) participants were presented with 3D everyday objects, located at different distances from the body, and were required to judge whether observation, manipulation, function and pointing verbs (e.g., “to look at”, “to grasp”, “to drink”, “to point”) were compatible with the observed objects. In the second study participants were required to provide explicit estimates of the distances between the objects and the body. Across the two studies, response times were faster with function and manipulation verbs than with observation and pointing verbs when objects were located in the near space. Importantly, with both function and manipulation verbs participants were faster when objects were presented in the actual than in the perceived reaching space.

Overall, results indicate that during verb comprehension a simulation is formed, and that objects are represented in terms of potential (and possible) actions. The dissociation between actual and perceived reaching space suggests that the simulation built during verb comprehension reflects the real dynamics of actions, rather than the way in which objects and distances are explicitly represented, and the real body characteristics, rather than the way in which the body is represented. The implications of the results will be discussed, arguing that affordances are not automatically activated but flexibly modulated by the physical context (near vs. far space with respect to the body) as well as by the linguistic context.

I will then report some kinematics studies (Scorolli, Daprati, Nico and Borghi, in preparation) aimed at showing that words can be intended as tools that render the far space closer. The implications of the results for embodied and grounded cognition, as well as for extended cognition, are discussed (Borghi, Scorolli, Caligiore, Baldassarre & Tummolini, 2013).


Email: anna.borghi@gmail.com


---------------------------

Human Evolution and Perspective: Is the Embodiment Relevant for Other Minds?


Tatiana Chernigovskaya

St. Petersburg State University, Russia

Human cognition came after human anatomy and was based on it. We need the human mirror neuron system for language and social interaction, and still more for learning itself: mirror neurons code actions, sounds, gestures, face and voice qualities to express emotions, allowing us to understand the intentions of other people. The ability to observe and comment on our own behavior is a basis for reflection - probably the only human-specific feature still accepted after years of anthropological and ethological studies of cognitive faculties. Embedding and recursion in syntax, quoting and Theory of Mind have developed since autonomous vocal language arose in Africa from a genetic mutation between 100,000 and 250,000 years ago. The human fossil and archaeological records indicate that symbolic consciousness is not the culmination that natural selection would predict. Instead, they show that major change has been episodic and rare and that the passage from non-symbolic to symbolic cognition is relatively recent and unprecedented.

Fully syntactical language is an essential requisite to share and transmit symbolic meaning, and this can be satisfactorily done in automatic speech processing - however, only for simple if not artificial texts. When we deal with complex information in natural surroundings we face not only the vagueness of language and cognition but the vagueness of the world itself, which causes ambiguity. There are many layers that subserve interpretation: anaphoric and deictic factors, similar pictures of the world, cultural background, intonations, gestures and facial expression, sense and type of humor, etc. Human language itself has these as its basis, together with innate linguistic and cognitive faculties. Body and 1st-person experience are surprisingly underestimated when we discuss cross-species interaction and homomorphic AI systems that are supposed to replicate human mind and behavior, which is crucial for relevant communication. To deal with other minds we should have a shared context, definitely based on compatible embodied cognition. To minimize the ambiguity and non-transparency that cause communicational collapse, one should think of bridging the potential gap between humans and other minds - animate or artificial.


Email: tatiana.chernigovskaya@gmail.com


------------------


The Role of Theory in the Neural Sciences


G.J. Dalenoort

(Groningen, The Netherlands)


The basic concepts and theories in the domains of the neurosciences are much more intuitive and based on common sense than in sciences like physics and biology. The designs of experiments are often inspired by the availability of specific methods of measurement. This has the disadvantage that it is hard to compare different experimental findings, since there is no common theoretical ground for the large variety of experimental results. There is some analogy to biology before Darwin (and others) provided us with the general theory of evolution, which allowed us to see the evolutionary origin of the differences between species of plants and between different species of animals. In physics (and astronomy) the role of theory is still much more dominant, given the availability of very general theories from which specific theories and models can be derived that generate precise predictions, which in turn inspire experiments. Some examples of theoretical ideas of the neurosciences will be discussed that may be used in a more systematic manner than is often the case, as well as a comparison between physics and other sciences in the ways properties are assigned to systems.


Email: <g.j.dalenoort@xs4all.nl>

---------------



Neurophysiology of
Speech Act Processing


Natalia Egorova

MRC, Cambridge, UK


Although language is a tool for communication, little is known about the brain mechanisms of speech acts, or communicative functions, for which words and sentences are used as tools. In a series of EEG, MEG, and fMRI experiments, in which participants observed communicative interaction, the time course and the brain areas involved in processing the speech acts of Naming and Requesting, expressed with single-word utterances, were investigated in both blocked and event-related designs.

The results showed that Naming speech acts, placing the emphasis on language-object referential links (Damasio et al., 1996), activated the semantic network (left angular gyrus and bilateral areas in the temporal cortex) to a larger extent than Requests. By contrast, there was more activation in the fronto-parietal areas for the Request speech acts, which can be explained by the involvement of the mirror neuron (inferior frontal gyrus, motor cortex, left anterior intraparietal sulcus, right posterior temporal sulcus) and theory of mind (medial prefrontal cortex, anterior cingulate, bilateral temporo-parietal junction) systems in understanding the action (Pulvermüller and Fadiga, 2010) and social interaction knowledge (Fogassi et al., 2005), along with the associated assumptions of the communication partners (Saxe, 2010; Van Overwalle and Baetens, 2009), relevant for this speech act. Consistent with earlier reports (Weylman et al., 1989; Zaidel et al., 2000), both hemispheres were active in speech act processing.

The differences between the pragmatic speech act types were first observed within 200 ms after word onset, preceding or taking place in parallel with access to semantic information. This early speech act discrimination is likely to be subserved by the mirror neuron system, followed by additional social inferencing between 200 and 300 ms supported by the theory of mind network.

References:

Damasio H, Grabowski TJ, Tranel D, Hichwa RD, Damasio AR (1996). A neural basis for lexical retrieval. Nature 380:499-505.

Fogassi L, Ferrari PF, Gesierich B, Rozzi S, Chersi F, Rizzolatti G (2005). Parietal lobe: From action organization to intention understanding. Science 308:662-667.

Pulvermüller F, Fadiga L (2010). Active perception: sensorimotor circuits as a cortical basis for language. Nat Rev Neurosci 11:351-360.

Saxe R (2010). Theory of mind (neural basis). In: Banks WP (ed.), Encyclopedia of Consciousness. Academic Press.

Van Overwalle F, Baetens K (2009). Understanding others' actions and goals by mirror and mentalizing systems: A meta-analysis. NeuroImage 48:564-584.

Weylman ST, Brownell HH, Roman M, Gardner H (1989). Appreciation of indirect requests by left- and right-brain-damaged patients: The effects of verbal context and conventionality of wording. Brain and Language 36:580-591.

Zaidel E, Kasher A, Soroker N, Batori G, Giora R, Graves D (2000). Hemispheric contributions to pragmatics. Brain and Cognition 43:438-443.


Email: <natalia.egorova@mrc-cbu.cam.ac.uk>

----------------


Comparative Cognition: Why Embodiment Will Not Solve the Problem


Tecumseh Fitch

University of Vienna


Email: <tecumseh.fitch@univie.ac.at>

------------------


Embodied Semantics: The Patterning of Physical Action Verbs


Helena Hong Gao

Nanyang Technological University Singapore



In this talk I am going to present my investigation of the semantic structure of the lexicon of physical action verbs. Chinese physical action verbs (PA verbs) are the target source for the analysis. A comparison is made between Chinese, English and Swedish. The purpose of the study is to demonstrate the links between linguistic structure and human cognition. Drawing on theoretical approaches in linguistics, cognition, and psychology, the central argument in my discussion of the relationships between language construction and human bodily action is the view that the event structures of physical action verbs are not arbitrarily constructed but rather built through systematic cognitive processes in relation to both human physical reality and the concrete reality of the world.

I will first give definitions of the semantic domain of PA verbs and then follow the principles and definitions to illustrate the semantic features of the PA verbs that depict actions involving different human body parts. Conceptual notions such as Motion and Contact, seen as the basic components of PA verbs, will be illustrated with a comparative analysis of the lexicalization patterns in English and Chinese. In the analysis of the event structures of PA verbs, the same key features, such as Motion, Contact, and Force, that are typical of most PA verbs are applied to guide the analysis. Prominent semantic features of the verbs depicting different body-part actions will be discussed in relation to their meaning extensions, near-synonyms and syntactic features. The various changes in the verbs’ event structures will be presented as a display of the influence of human cognition, perception and experience, rather than simply taking them as syntactic properties from a traditional grammarian’s point of view.

In addition, the differences in depicting intentional bodily contact for expressing positive and negative emotions will be compared to show that the intensity, speed, and force of the contact embedded in PA verb semantics are a reflection of human cognitive understanding of the physical sensations and emotional reactions caused by various types of physical contact. Empirical research on how young infants acquire PA verbs will also be presented to support the argument that children’s conceptual construction of the different senses depicted by PA verbs and the infants’ physical capability are independently developed in one way and interrelated in another.


Email: helenagao@ntu.edu.sg


------------------




Simulating Cortical Processes of Word Learning and Spontaneous Emergence of Intentions
to Speak in a Neurocomputational Model of Frontal and Temporal Areas


Max Garagnani

Free University of Berlin and University of Plymouth



Recent experimental evidence suggests that motor and sensory cortical areas are engaged not only in the execution of action and in the perception of physical entities but, critically, also in the processing and comprehension of words that are used to refer to such actions and entities. In line with this evidence, embodied semantics theories typically postulate that the emergence of strong links between the brain correlates of “phonological” (auditory-articulatory) word forms and meaning is the result of the cortex’s ability to arbitrarily associate patterns of activation that repeatedly co-occur in distinct (sensory and motor) areas. While this is a plausible assumption, a working computational model is still needed to mechanistically demonstrate that, and explain how, exactly, the postulated principles of associative learning - which should reflect biological processes known to occur in the cortex, and must act within specific anatomical structures - can induce the emergence in the cortex of both word-form representations (auditory-articulatory links) and links between these representations and corresponding semantic ones. Furthermore, verbal or diagrammatic theories fail to generate precise, quantitative predictions about the topography and dynamics of brain responses to linguistic input (e.g., to newly learnt meaningful words vs. unknown material) - predictions that are needed for a theory-driven approach to experimental research, and which only a formal / computational model can provide.

We implemented a neuroanatomically grounded model of the left perisylvian (“language”) cortex, aimed at simulating and explaining, at the cortical level, processes of word learning as they are believed to occur in motor and sensory primary, secondary and higher association areas of the inferior frontal and superior temporal lobes of the human brain. Mechanisms and connectivity of the model aim to reflect, as much as possible, functional and structural features of the relevant cortices, including spontaneous (baseline) neuronal firing, local (within-area) and long-distance (between-area) cortical connectivity, and well-known mechanisms of synaptic plasticity (long-term potentiation and depression).
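The associative principle such models rely on - Hebbian strengthening of co-active connections (long-term potentiation) combined with weakening of uncorrelated ones (long-term depression) - can be sketched in a few lines. This is an illustrative toy rule only, not the authors' actual implementation; the learning rates, firing threshold and activation patterns are invented for the example:

```python
import numpy as np

def hebbian_update(w, pre, post, lr_ltp=0.05, lr_ltd=0.01, theta=0.5):
    """One plasticity step for a weight matrix w (pre -> post).

    Synapses between co-active pre/post units are strengthened (LTP);
    synapses whose presynaptic unit fires while the postsynaptic one
    stays below threshold are weakened (LTD). Toy rule, illustrative only.
    """
    pre_on = (pre > theta).astype(float)    # which presynaptic units fire
    post_on = (post > theta).astype(float)  # which postsynaptic units fire
    ltp = lr_ltp * np.outer(pre_on, post_on)        # strengthen co-active pairs
    ltd = lr_ltd * np.outer(pre_on, 1.0 - post_on)  # weaken pre-only pairs
    return np.clip(w + ltp - ltd, 0.0, 1.0)         # keep weights bounded

# Repeated co-activation of an "auditory" and an "articulatory" pattern
# gradually binds them into a cell-assembly-like link.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 0.1, size=(4, 4))        # weak initial connectivity
auditory = np.array([1.0, 1.0, 0.0, 0.0])     # toy word-form pattern
articulatory = np.array([0.0, 0.0, 1.0, 1.0])
for _ in range(20):
    w = hebbian_update(w, auditory, articulatory)
print(w[0, 2], w[2, 0])  # cross-links from active units grow; idle ones stay weak
```

Repeating the update while the two patterns co-occur drives the cross-links between their active units toward the upper bound - a toy analogue of how repeated sensory-motor co-activation is assumed to bind distributed word circuits together in the full model.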

I will briefly review how this model can (1) simulate the spontaneous emergence of memory traces for words as distributed, strongly connected neural circuits (Hebbian “cell assemblies”) whose activation behaviour replicates and explains known neurophysiological responses to meaningful words and unfamiliar pseudowords, and (2) mechanistically illustrate why the neural processes underlying the spontaneous emergence of decisions to speak originate in higher-association prefrontal and posterior-superior temporal areas (and not in the primary motor ones involved in the execution of the articulatory movements). Finally, I will describe an extended architecture, based on the same functional principles but augmented with motor and visual semantic areas, which we have used to investigate brain processes of novel meaningful word learning. Preliminary results obtained with this model show how the “grounding” of the meaning of novel action and visual words in motor and perceptual systems can be explained purely as a mechanistic consequence of neurobiological principles, anatomical structure and sensorimotor experience.


Email: mg421@cam.ac.uk


------------------


Language Comprehension Warps the Mirror Neuron System


Arthur Glenberg

Laboratory for Embodied Cognition, Wisconsin-Madison


Is the mirror neuron system (MNS) used in language understanding? According to embodied accounts of language comprehension, understanding sentences describing actions makes use of neural mechanisms of action control, including the MNS. Consequently, repeatedly comprehending sentences describing similar actions should induce adaptation, much like repeating similar literal actions. Because the MNS plays a role in both the production and perception of action, adapting it should warp the perception of actions. After reading multiple sentences describing transfer of objects away from or toward the reader, adaptation was measured by having participants predict the end-point of videotaped actions. Short-lived adaptation of visual perception was produced by the sentence comprehension task, but (a) only for videos of biological motion, and (b) only when the effector implied by the language (e.g., the hand) matched the effector in the videos. These findings are signatures of the mirror neuron system.


Email: <aglenber@asu.edu>


------------------


Origin of Language as a Preadaptation in Hominins


Harry J Jerison

Department of Psychiatry and Biobehavioral Sciences, UCLA.


I review new evidence to update my published views on language origins (Jerison, H.J., "Brain Size and the Evolution of Mind," James Arthur Lecture 59, American Museum of Natural History, 1991). Briefly, my published view was that language began as a cognitive adaptation to navigate a then-new environmental niche: a territory covering deforested margins of a forest range, and adaptations to navigate that broader range. Prior adaptations of a primarily herbivorous chimpanzee-like creature required knowing only a few hectares of forest in which to graze. The new adaptation was to a wolf-like social-carnivore niche in which animal prey are gathered across a range measured in kilometers. The cognitive requirement is to know the larger range by marking and remembering the marks and their location. I argued that a normal primate lacking good cognitive olfactory cues has available visual and tactile cognitive cues adequate for small ranges. Wolves, as normal social carnivores in an enlarged range, work with olfactory cognitive cues. The hypothesis was that, with the reduced olfactory channel typical of all living primates, the available auditory-vocal channel was exploited by the earliest hominins for marking and sensing features of a broader environment. That auditory-vocal channel as a cognitive system developed into the beginnings of language for navigating the range communally. We now have new evidence that baboons have successfully invaded an enlarged range, primarily for finding routes to water but also for occasional social hunting. Their olfactory systems are the normally reduced primate systems, and their cues are not well known. The drying of the periphery of the normal range of late Pliocene hominins is now recognized as more complex than I originally suggested. Finally, further analysis of the ecosystem of very early hominins, in particular Ardipithecus, also requires a closer look at the data on ecosystems of four or five million years ago. The basic scenario remains acceptable, and it can now be extended and refined by providing a more detailed analysis of its dimensions.


Email: <hjerison@ucla.edu>


------------------


“Representing, Fast and Slow”, for Abstract and Concrete Concepts: Are there Differences in Embodied Connotations?


Yanina Ledovaya

Saint Petersburg State University


POSTER

Two years ago in Oxford we presented the results of a qualitative psychological study that showed differences in the pictographical representations of concrete and abstract concepts, both for a brief sketch and for a pictogram that reflected the core characteristics of a concept (thus 4 sets of pictograms were analyzed). We gave two concepts: one concrete - “DESSERT” - and one abstract - “IDEA”. The results, in terms of the frequencies of kinaesthetic connotations in the pictograms, showed that the abstract concept “IDEA” was represented in a more “sensorimotor” way than the concrete concept “DESSERT”. Also, the depictions of the core characteristics of a concept revealed more unique and embodied features than the brief sketches, which elicited more conventional and commonplace, “prototypical” images (cf. Rosch, 1974). One probable explanation is that the two regimes of representing - the brief and the thoughtful one - are “fast” and “slow” in the terms of D. Kahneman, and the latter gives more embodied connotations, as “primary metaphors” (cf. Grady, 1997) are more deeply built into personal experience than stereotypes.

This time we suggested three concepts for pictographical representation: two abstract - the scientific “ENERGY” and the social “RESPONSIBILITY” - and one concrete - “SOIL” (“ground”). Participants again pictured the object(s) that conveyed the first impression of a concept (pictograms were named “En1”, “Resp1”, “Soil1”), and then they pictured the object(s) that conveyed the core idea of a concept (pictograms were named “En2”, “Resp2”, “Soil2”). Each task took 1 minute. The qualitative analysis was made on the pictograms that had been drawn by 18 students aged 19-21. As in the previous study, we examined the images twice: first in terms of the depicted topic, and second in terms of the type of “primary metaphor” (cf. Grady), or “force dynamics” (cf. Talmy).

Analysis of the topics showed that the #1 pictograms more often depicted commonplace images, or “visual prototypes” (28% of En1 are suns, stars or fireworks; 65% of Resp1 are humans; 22% of Soil1 are dark areas on the paper, and another 22% are plants and animals on the ground). The #2 pictograms were distributed more variously: “visual prototypes” decreased; for the abstract concepts, “human” depictions grew, and more “visual narratives”, or “scenarios” (cf. Musolff), appeared; these reflected the “stories” “telling” about the representations of the core features of the concepts, and they were more unique.

The analysis of “primary metaphors” let us put all the images into the following categories: “interactions and dynamics”, “development and changes”, “humans”, “beaming” and “none” (if an image depicted something static or inanimate). Almost the same “primary metaphors” were found in the depictions of the concepts “IDEA” and “DESSERT” in our previous study, so they may be universal. In this study we did not replicate the previous effect - that there were more kinaesthetic elements in the #2 attempt, when depicting the core characteristics of the concept. We explain this by the specificity of the stimuli: “ENERGY” is too active in its literal, surface meaning, “SOIL” is less concrete than “DESSERT”, and “RESPONSIBILITY” proved to be a very difficult task for our subjects to depict.


Email: <ledovaya@gmail.com>


---------------


Virtual Sociable Agents Match Human Trainers in Learning Foreign Language


Manuela Macedonia

Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig


Laboratory research has repeatedly demonstrated that verbal information is better memorized, in quantity and over time, if it is accompanied by gestures during encoding. We explore whether gesture-supported learning of words in a foreign language can be successfully achieved with a virtual agent with human traits that can enunciate the items to be learned and perform gestures in a similar way to a human. Further, we ask whether training with a virtual agent elicits the same results as training with a human trainer. In a within-subject design, 32 subjects learned 45 novel words in Vimmi, an artificial corpus created for experimental purposes. The words conformed with Italian phonotactics and were assigned common meanings such as “bridge” and “necklace”. They were equally distributed across three training conditions: a baseline (participants heard the words and read them aloud) and two conditions in which participants were additionally cued to imitate the gestures performed either by the virtual agent in one case or by a human actor in the other. The iconic gestures mirrored some feature of each word’s semantics chosen by the experimenter. Participants’ memory performance was assessed with recall tests (one free and one cued) at different time points. In the free recall test, memory performance was best for items trained with the virtual agent in the short range (days 01-03). Further, on day 30, training with gestures with both the virtual agent and a human trainer had a significant impact on memory performance for novel words compared to the baseline. High performers learned significantly better with the virtual agent than with the human trainer. In the cued recall test, we found significant effects of training on day 30 of the experiment. Again, training with the agent yielded significantly better results for high performers. Overall, this experiment confirms previous studies on the enhancing effect of gestures on memory for verbal information. Furthermore, the data show surprising results concerning the impact of the virtual agent on performance, particularly for high performers. Our study shows for the first time that humans can successfully be trained by a human-computer interface when they are learning novel language items. We discuss possible applications in second language instruction and in other domains such as language rehabilitation.


Email: manuela.macedonia@jku.at


-----------------


Gesture-Speech Unity, or Embodiment: Its Origin in Human Evolution


David McNeill

University of Chicago


Minimal packages of language embodiment have been called growth points (GPs). In a GP, gesture and speech are inherent and equal parts; out of a GP comes speech orchestrated around a gesture. Can theories of language origin explain this dynamic process? A popular theory, gesture-first, cannot; in fact, it fails twice: it predicts what did not evolve (that gesture was marginalized when speech emerged), and it does not predict what did evolve (that there is gesture-speech unity). A new theory, called Mead’s Loop, is proposed that meets the test. Mead’s Loop agrees that gesture was indispensable to the origin of language but holds that gesture was not first, that any gesture-first stage could not have led to language, and that to reach it gesture and speech had to be “equiprimordial.”


Email: dmcneill@uchicago.edu


------------------


The boundaries of Babel (or flesh becomes words)


Andrea Moro

Institute for Advanced Study, IUSS-Pavia, Italy


Email: <andrea.moro@iusspavia.it>


------------------


Sensorimotor Semantics in Autism: ‘Disembodiment’ and Category-Specific Impairments

Rachel Moseley

MRC Cognition and Brain Sciences Unit, Cambridge, UK

The involvement of sensorimotor systems for the processing of action-related language has been robustly demonstrated in neuroscience and neuropsychology. More recent research has demonstrated a critical role for motor circuits in the representation of abstract emotion words, too: as the only visible referents of an internal feeling, emotional actions are suggested to bridge the gap between word and meaning for these abstract concepts. What, however, is the precise role of such activation: does it reflect a functionally important stage in the retrieval of meaning, or an epiphenomenal by-product of activity elsewhere? In order to address this question it is necessary to investigate the effects of motor disease or lesions upon semantic processing, and therefore the representation of action words and abstract emotion words was explored in individuals with autism spectrum conditions (ASC), a population characterised by structural and functional abnormalities of cortical motor systems. Using a range of methodologies including fMRI, EEG/MEG and behavioural testing, abnormalities were indeed revealed in this population for the processing of both action- and emotion-related words, both of which are ‘disembodied’ from cortical motor systems in comparison to typical controls. This inactivity in motor systems during action and emotion word processing appears to correlate with a greater number of autistic symptoms; furthermore, motor inactivity during action word processing correlates with a category-specific semantic deficit. The results of these studies and their implications for the role of sensorimotor systems in the representation of these concepts will be discussed.

Email: <Rachel.Moseley@mrc-cbu.cam.ac.uk>

------------------


Motor Primitives: a Physical Vocabulary for Action and Learning


Ferdinando A. Mussa-Ivaldi

Northwestern University and Rehabilitation Institute of Chicago


It is widely recognized that to perform even the simplest actions, such as reaching for a glass of water, the brain must carry out complex computations. But what does it mean to compute? Engineers can write pages of mathematical symbols to represent the dynamics of a robotic arm. I will argue that while the brain must solve similar problems, the symbol system it uses is not composed of trigonometric functions and arithmetic operators; instead, it is a vocabulary of dynamical primitives defined by the neuromuscular apparatus. Studies of motor learning suggest that this embodied vocabulary has compositional properties that allow the construction of two related objects: control policies and internal models.

A control policy defines a desired action as a negotiation with the environment: it generates an action (i.e. a force) as a function of the state of motion of a limb and of the current goal. In mathematical terms, a control policy is a force field. A number of studies have shown that the concurrent activations of multiple muscles generate force fields acting at the interface between limb and environment. Fields have the property of combining by vector summation to yield other fields. The size of the repertoire that can be generated in this way depends critically on the geometrical and mechanical structure of the “primitive” force fields in the vocabulary.

The combination of force fields from a finite vocabulary also leads to a representation of the dynamical properties of the controlled limbs and environment. In this way the brain may form adaptive internal models that allow the family of actions we learn through time to be transported across a variety of mechanical environments, without the need to relearn each of them.
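The compositional property described in the abstract (primitive force fields combining by vector summation into new fields) can be sketched in a few lines. The fields, equilibrium points and weights below are hypothetical illustrations, not data from the talk:

```python
# A minimal sketch (hypothetical fields, not data from the talk): each
# "primitive" is a convergent force field pulling a 2-D limb toward an
# equilibrium point; weighted vector summation of primitives yields a new
# field, illustrating the compositional property described in the abstract.

def primitive_field(cx, cy):
    """Force field converging on the equilibrium point (cx, cy)."""
    def field(x, y):
        return (cx - x, cy - y)  # force grows with distance from equilibrium
    return field

def combine(fields, weights):
    """Weighted vector summation of force fields is itself a force field."""
    def combined(x, y):
        fx = sum(w * f(x, y)[0] for w, f in zip(weights, fields))
        fy = sum(w * f(x, y)[1] for w, f in zip(weights, fields))
        return (fx, fy)
    return combined

f1 = primitive_field(1.0, 0.0)
f2 = primitive_field(0.0, 1.0)
f = combine([f1, f2], [0.5, 0.5])
print(f(0.0, 0.0))  # force at the origin: (0.5, 0.5)
```

The combined field is again convergent, with an equilibrium midway between the two primitives, which is the sense in which summation over a finite vocabulary generates a larger repertoire.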


Email: sandro@northwestern.edu

-----------------


Involving the Body in Sentence Comprehension: Action-Sentence Compatibility Effects in British Sign Language and Written English

Pamela Perniss

POSTER (authors are: Vinson, David; Fox, Neil; Perniss, Pamela; Vigliocco, Gabriella)

University College London


A wide variety of studies have highlighted a central role of bodily experience in language comprehension (e.g. Barsalou 2009; Meteyard et al. 2012, for review). In a notable study, Glenberg & Kaschak (2002) asked English speakers to judge the sensibility of sentences like “Andy delivered the pizza to you”, “You communicated the message to Adam”, and “Kate ate the pizza to you”. Sensible sentences implied motion either toward or away from the body, in both concrete and abstract contexts. Importantly, to indicate sensibility judgments, participants pressed a key located either near or far from their body. For both concrete and abstract sentences, participants responded faster when the direction of motion implied in the sentence was congruent with that of the required physical movement than when it was incongruent. This finding, the Action-Sentence Compatibility Effect (ACE), implies that comprehension of written language involves simulation of the actions depicted in the sentences.

Given such findings for written language stimuli, one may wonder what would happen in sign language comprehension. Specifically, many sign language verbs encoding transfer of the type studied by Glenberg & Kaschak (2002) are so-called directional verbs: they explicitly realise directionality of motion through a corresponding movement of the hands through space. In order to address this question, we have investigated ACE effects in Deaf bilinguals, testing for effects in both their L1 (video-recorded BSL) and L2 (written English).


Materials for the English study were taken from Glenberg & Kaschak (2002); BSL sentences indicated transfer between the sign model and the participant, or between a third person and the participant (see Examples), including an equal number of directional and non-directional verbs. Participants came for two sessions, which differed only in the direction of the yes response (toward vs. away from the body). In each session they carried out both the English and the BSL version of the experiment.

In English, we replicated the ACE effect. Participants were faster when the motion implied in the sentence was congruent with their response direction (1289 ms vs. 1378 ms): Deaf bilinguals simulate the actions implied in written English sentences as they comprehend them. We did not find any ACE effect in BSL (congruent vs. incongruent: 2411 vs. 2403 ms), even for directional verbs.
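As a minimal sketch of how such a congruency effect is computed: the ACE effect is simply the incongruent-minus-congruent difference in mean reaction time. The per-trial values below are toy numbers chosen only so the condition means match the 1289 vs. 1378 ms reported above; they are not the study’s raw data.

```python
# Toy per-trial reaction times (ms); illustrative only, chosen so the
# condition means reproduce the 1289 vs. 1378 ms reported in the abstract.
rts = {
    "congruent":   [1250, 1300, 1317],
    "incongruent": [1340, 1390, 1404],
}

def mean(xs):
    return sum(xs) / len(xs)

# The ACE congruency effect is the incongruent-minus-congruent mean RT;
# a positive value indicates a congruency advantage.
ace_effect = mean(rts["incongruent"]) - mean(rts["congruent"])
print(f"ACE effect: {ace_effect:.0f} ms")  # prints "ACE effect: 89 ms"
```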

This is surprising, given the greater involvement of the body in BSL and the visual iconicity of motion events. One possible explanation for the lack of an ACE effect may be that the involvement of the motor system in comprehension is blocked by perceiving the physical engagement of the motor articulators in sign language. It is also possible that event simulation is limited to “impoverished” input contexts, e.g. written language presentation, in general. A richer, multichannel presentation of language, particularly one involving depictive, iconic representation, may not rely on action simulation in comprehension. These results shed light on how motion events are simulated during sign language comprehension, and how comprehending sign language may modulate hand-specific action simulation.


Examples

(1) FUNDING ME 1-GRANT-2: “I grant you the funding.” (directional verb, abstract event)

(2) CARDS YOU DEAL ME: “You deal me the cards.” (non-directional verb, concrete event)

(3) DEGREE JAMES 3-AWARD-2: “James awards the degree to you.” (directional verb, abstract event)

References

Barsalou, L.W. (2009). Simulation, situated conceptualization, and prediction. Philosophical Transactions of the Royal Society of London: Biological Sciences, 364, 1281-1289.

Glenberg, A.M. & Kaschak, M.P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9(3), 558-565.

Meteyard, L., Rodriguez Cuadrado, S., Bahrami, B. & Vigliocco, G. (2012). Coming of age: a review of embodiment and the neuroscience of semantics. Cortex, 48(7), 788-804.


Email: <santiago@ugr.es>


------------------


Artificial Grammar Learning and the Primate Brain


Christopher I. Petkov

Newcastle University


Artificial Grammars (AGs) are designed to emulate certain structures in human language syntax. An interesting ongoing empirical question is which animal species can learn various levels of AG structural complexity. Understanding this could clarify the evolutionary roots of human syntactic complexity and facilitate the development of animal models to study language precursors at the cell and molecular levels. In this talk I will first describe the behavioral AG learning work that we have been conducting with nonhuman primates. Here, I will propose a quantitative approach to relate our results to those obtained with other animal species and/or with different AG structures. Then I will describe functional neuroimaging results on the brain regions that appear to be involved in AG learning in nonhuman primates. I conclude by overviewing work that is underway to understand some of these processes at the neuronal level.


Email: chris.petkov@ncl.ac.uk


--------------------


Feeling the Invisible: the Emotional Grounding of Abstract Concepts


Marta Ponari

(authors are: Marta Ponari, Gabriella Vigliocco, David Vinson, Andrew Anderson, William Ratoff, Sze Long Lau, Matilde Vaghi, Bahador Bahrami)

University College London

POSTER


Mastering the meaning of abstract concepts such as “algebra” or “stigma” underscores human progress. Traditionally, this ability has been linked to linguistic skills. However, recent work has led to the hypothesis that abstract concepts in the brain are shaped by the ecological distribution of emotional associations: concrete concepts have more sensori-motor associations, whereas abstract concepts have more affective associations. Here we assess whether emotional associations of abstract but not concrete words could be available preconsciously, as for other evolutionarily relevant stimuli like faces. We used continuous flash suppression, a variant of binocular rivalry, to render stimuli invisible, and measured how long it takes them to become visible (Time to Emerge, T2E). First, we measured T2E for negative, positive and neutral words. We found that negative words emerged from suppression more slowly than neutral and positive words. In Experiments 2 and 3, we then contrasted well-matched negative or neutral abstract and concrete words. Here we found that negative abstract words took longer to emerge than well-matched neutral and/or concrete words, confirming a pivotal role of emotional associations in preconscious processing of abstract, but not concrete, words.

These results go against the prevalent view that abstract words, as linguistic symbols, only undergo linguistic processing; moreover, they pose novel constraints on the neural networks argued to be engaged in the preconscious processing of emotional associations.


Email: <santiago@ugr.es>


------------------



How Neurons Make Meaning: Brain Mechanisms for Embodied and Disembodied Semantics


Friedemann Pulvermüller

Brain Language Laboratory, Freie Universität Berlin



How brain structures and neuron circuits mechanistically underpin symbolic meaning has recently been elucidated by neuroimaging, neuropsychological and neurocomputational research. Modality-specific “embodied” mechanisms anchored in sensorimotor systems appear to be relevant, as are “disembodied” mechanisms in multimodal areas. Four semantic mechanisms are proposed and spelt out at the level of neuronal circuits: referential semantics, establishing links between symbols and the objects and actions they are used to speak about; combinatorial semantics, allowing the learning of symbolic meaning from context; emotional-affective semantics, establishing links between signs and internal states of the body; and abstraction mechanisms, generalizing over a range of instances of semantic meaning.


Email: <friedemann.pulvermuller@fu-berlin.de>


------------------


Should I Move My Leg to Learn “to Kick” in a Foreign Language? The Role of Simulation During Language Learning in a Virtual Environment


Claudia Repetto, Barbara Colombo, Giuseppe Riva

Dept. of Psychology, Catholic University of Sacred Heart, Milan, Italy


The present paper describes a study performed to assess whether the motor simulation process is involved in verbal learning. More specifically, we focused on the role of simulation in second language learning, using a virtual environment. Participants had to explore a virtual park (thus performing the virtual feet-and-leg movements required to walk or run) while learning 15 new Czech verbs (action verbs, describing movements performed with either the hand or the foot, and abstract verbs). This learning condition was compared to a baseline condition in which no movements (virtual or real) were allowed. The goal was to investigate whether the virtual action (performed with the feet) would promote or interfere with the learning of verbs describing actions performed with the same or a different effector. The number of verbs correctly remembered in a free recall task was computed, along with reaction times and number of errors in a recognition task. Results indicated that simulation per se has no effect on verbal learning; rather, its effect is mediated by the features of the virtual experience, which induce different levels of presence.


Email: <Claudia.Repetto@unicatt.it>


--------------------



How Language Incorporates Affordances


Lucia Riggio

University of Parma, Italy


The notion of affordances, or micro-affordances, is central to embodied cognition. Visual objects that have the potential to be manipulated elicit motor programs associated with their actual manipulation. More specifically, pragmatic features or affordances of objects, such as size, orientation or location, activate a set of potential hand movements associated with specific actions, such as reaching or grasping. There has also been growing evidence supporting the notion that language comprehension relies on the recruitment of the neural systems actually used for perception and action, i.e. that language understanding is embodied. However, the research has mainly focused on verb processing and much less on noun processing. Within this framework I will review recent data bearing on two different but complementary issues: 1. whether nouns modulate the motor system as verbs do, examining the direction of the influence and its temporal parameters; 2. whether nouns elicit the same motor information as their external referents, or whether some differences are present. Particularly relevant in this respect is a recent distinction between stable and temporary affordances. Stable affordances reflect those features that can be stored, remaining almost constant in interactions with objects across different contexts. Temporary affordances are context-dependent properties, based on the way the object is presented to the observer.

Email: riggio@unipr.it


------------------


The Left-Right Mental Time Line in Sign Language: Language vs. Experience

Julio Santiago

(authors are: Santiago, Julio; Fox, Neil; Vinson, David; Vigliocco, Gabriella)

University College London; University of Granada


POSTER


Not a single oral language in the world refers to the left-right spatial dimension when talking about time. Yet literate speakers do conceptualize time as flowing leftwards or rightwards (depending on writing direction), as shown by the left-right space-time congruency effect. In contrast, signed languages do conventionally use the left-right axis to deploy the temporal unfolding of events. Thus, they provide a rare occasion to test whether conventionalizing a particular conceptual mapping in language, over and above non-linguistic conventions such as writing direction, affects the way that mapping is used or the strength with which it is activated. In the present study we assessed the left-right space-time congruency effect in a sample of users of British Sign Language, and compared it to that observed in Spanish speakers, in spite of the extreme differences in conventionalization of the left-right time line in the two languages (highly conventional in BSL).


Email: santiago@ugr.es


------------------

Interlocutors and Empathy in Abstract Conceptualization


Theresa S. S. Schilhab

POSTER

University of Aarhus, Denmark



Within sociology, ‘interactional expertise’ denotes expert knowledge acquired through immersion in a linguistic community without direct experience of the practice to which the knowledge refers (e.g. Collins 2004). The existence of interactional expertise suggests that people lacking the embodied expertise of a practice might still talk about that skill as if they possessed the embodied skills.

I will argue that, like the acquisition of interactional expertise, most academic knowledge acquisition lacks direct experiences, is therefore ‘abstract’, and is highly dependent on linguistic exchanges (e.g. Borghi et al., 2011; Schilhab, 2011). In contrast, acquisition of concrete knowledge, as when defining ‘red’ by pointing out red objects, proceeds by ostensive learning and perceptual cues.

Thus, knowledge of imperceptible (i.e. absent) referents such as dinosaurs, medieval times or desire is acquired primarily through conversations. The absence of direct experiences forces the interlocutor to attain comprehensibility through language, by eliciting mental imagery in the child (e.g. Moulton & Kosslyn, 2009).


Drawing upon ‘grounded cognition’ studies in contemporary neuroscience and cognitive psychology, I propose that the success of abstract knowledge acquisition crucially depends on the empathic skills of the interlocutor, in the sense of tuning into the other person’s thoughts and feelings. Why?

First, mental imagery during conversation unavoidably builds on the ability of the adult interlocutor to gauge the level of comprehensibility in the learner. His or her inevitable assignment in conversation is to appropriately monitor the linguistic maturity and level of understanding of the child.

Second, for abstract knowledge to be conveyed, the interlocutor will need to establish metaphors or phrases that immediately capture the concrete meaning of the abstract knowledge. Just as the adult in ostensive learning furnishes the on-line world, for instance holds up a cup, points to the cup and exclaims ‘cup’ (e.g. Pulvermüller, 2011), the interlocutor furnishes the off-line world. He or she seeks mutual comprehensibility and makes mental tableaus that are thought to match the understanding of the child.

Third, despite exchanges of questions and answers, the adult takes on the leading role in attaining mutual comprehensibility. In this endeavor, the interlocutor must refrain from imposing his or her own level of understanding on the listener. At the same time, he or she must sustain his or her own (expert) understanding while remaining sensitive to the level and quality of the imagination of the child.


Borghi, A. M., Flumini, A., Cimatti, F., Marocco, D., & Scorolli, C. (2011). Manipulating objects and telling words: a study on concrete and abstract words acquisition. Frontiers in Psychology, 2, 1-14.

Collins, H. (2004). Interactional expertise as a third kind of knowledge. Phenomenology and the Cognitive Sciences, 3, 125-143.

Moulton, S. T. & Kosslyn, S. M. (2009). Imagining predictions: mental imagery as mental emulation. Phil. Trans. R. Soc. B, 364, 1273-1280.

Pulvermüller, F. (2011). Meaning and the brain: The neurosemantics of referential, interactive and combinatorial knowledge. Journal of Neurolinguistics. doi: 10.1016/j.jneuroling.2011.03.004

Schilhab, T. (2011). Derived embodiment and imaginative capacities in interactional expertise. Phenomenology and the Cognitive Sciences. doi: 10.1007/s11097-011-9232-0


Email: tsc@dpu.dk



--------------------


Is there an effector-specific involvement of motor cortex in the comprehension of meaningful words? A TMS study


POSTER

Malte Schomers, Friedemann Pulvermüller

Brain Language Laboratory, Freie Universität Berlin

Whether the speech motor system is involved in language comprehension is currently a matter of intense debate. Numerous fMRI studies have shown activity in motor cortex during passive perception of syllables (e.g. Wilson et al., 2004; Pulvermüller et al., 2006). Furthermore, a TMS study by D’Ausilio et al. (2009) showed that TMS to either the lip or tongue representation could selectively facilitate perceptual processing of syllable-initial lip-related ([b]/[p]) and tongue-related ([d]/[t]) phonemes overlaid with noise and presented in an explicit phonological classification task. These previous studies have been criticized because explicit phoneme classification may reflect decision processes independent of speech comprehension, so that post-understanding response bias may be an issue (see, e.g., Venezia et al., 2012). A further question relates to the noise overlay, as perceptual classification deficits seem to vanish without noise (D’Ausilio et al., 2012). Finally, speech classification and perception tasks may be considered not ecologically valid, or “unnatural”, as the normal function of language is to convey meaning.

To clarify the functional contribution of articulatory motor cortex to speech comprehension, we here presented words in a single word comprehension task and probed the effect of TMS stimulation of tongue and lip motor cortex. Subjects heard meaningful words, presented without noise, starting with either “lip phonemes” ([b]/[p]) or “tongue phonemes” ([d]/[t]), and carried out a word-to-picture matching task immediately after auditory presentation of the word. In agreement with the previous study by D’Ausilio et al. (2009), we found significant effects suggesting that TMS stimulation of motor cortex can modulate speech comprehension in an effector-specific manner.



References:

Wilson, S.M. (2004). Listening to speech activates motor areas involved in speech production. Nat. Neurosci. 7, 701-702.

Pulvermüller, F., Huss, M., Kherif, F., Moscoso Del Prado Martin, F., Hauk, O., and Shtyrov, Y. (2006). Motor Cortex Maps Articulatory Features of Speech Sounds. Proc. Natl. Acad. Sci. 103, 7865-7870.

D’Ausilio, A., Pulvermüller, F., Salmas, P., Bufalari, I., Begliomini, C., and Fadiga, L. (2009). The motor somatotopy of speech perception. Curr. Biol. 19, 381-385.

D’Ausilio, A., Bufalari, I., Salmas, P., and Fadiga, L. (2012). The role of the motor system in discriminating normal and degraded speech sounds. Cortex 48, 882-887.

Venezia, J.H., Saberi, K., Chubb, C., and Hickok, G. (2012). Response bias modulates the speech motor system during syllable discrimination. Front. Psychol. 3.


Email: m.schomers@gmx.de


------------------


Does Verbal Categorization Require Embodied Motor Representations?


Olga Shcherbakova, Ivan Gorbunov, Irina Golovanova

POSTER

Saint Petersburg State University


Verbal categorization is considered one of the most complex cognitive skills. Cognitive psychologists (following the ideas of L.S. Vygotsky, J. Piaget and A.R. Luria) usually understand it in terms of conceptual thinking, which is the highest form of intelligence, has social and cultural origins, and is trained during childhood and adolescence. Verbal categorization also tends to be understood in terms of abstract mental operations, as an amodal and “pure” thinking process. But there is some evidence that abstractness in mental processing is rooted in bodily and motor experience (Lakoff & Johnson, 1999, 2002; Fenici, 2011).

We examined functional brain state dynamics during the solving of various verbal tasks: “Combining 3 concrete concepts into 1 generalized” (CC), “Metagrams solving” (MS), and “Giving reasons for opposite statements” (OS). We supposed that functional state patterns differ across types of verbal task, which require different kinds of mental operations. We were also interested in identifying the EEG patterns underlying verbal categorization operations.

34 volunteers (male and female, aged 17-33) participated after informed consent. EEG activity during verbal task solving was monitored over 19 scalp locations. The 19 EEG traces were digitized online at 250 Hz; 2244 EEG tests and 1122 responses to intellectual tasks were registered. Statistically significant dynamics of EEG power were revealed in all three types of verbal task.

The most interesting result was the increase of delta activity in the left parietal and occipital areas, as well as in the left frontal area, for the CC task. A decrease of delta activity was shown in central parietal, occipital and right temporal areas. Also, an increase of delta activity during OS-task solving was revealed at all locations. So, there are dynamics in subcortical activity while solving tasks requiring categorization and abstract thinking skills. Subcortical activation in the right temporal area could be considered a correlate of processing the core, conceptual characteristics of a word. As the occipital and parietal areas are the neural basis for the body schema and spatial perception, the question arises what their role in abstract mental processing is. We suppose that verbal categorization and abstract operations are not “digital” but embodied, involving motor representations that underlie conceptual thinking.
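The delta-band power measure underlying analyses like the one above can be sketched in a few lines for a single channel sampled at 250 Hz, as in the study. The signal, epoch length and band edges below are hypothetical illustrations, not the study’s data or pipeline:

```python
import cmath
import math

# Hypothetical sketch: estimate delta-band (~1-4 Hz) power for one EEG
# channel sampled at 250 Hz; the signal here is synthetic (a dominant
# 2 Hz delta component plus a weaker 10 Hz alpha-like component).
fs = 250
n = 1000  # 4-second epoch
signal = [math.sin(2 * math.pi * 2 * i / fs) + 0.2 * math.sin(2 * math.pi * 10 * i / fs)
          for i in range(n)]

def dft_power(x, k):
    """Power of the k-th DFT bin (bin k corresponds to k * fs / n Hz)."""
    coeff = sum(x[i] * cmath.exp(-2j * math.pi * k * i / len(x)) for i in range(len(x)))
    return abs(coeff) ** 2

# Frequency resolution is fs / n = 0.25 Hz, so bins 4..16 span 1-4 Hz.
delta_power = sum(dft_power(signal, k) for k in range(4, 17))
total_power = sum(dft_power(signal, k) for k in range(1, n // 2 + 1))
print(f"delta fraction of total power: {delta_power / total_power:.2f}")
```

Here the 2 Hz component dominates, so most of the power falls in the delta band; in practice one would compare such band-power estimates across conditions and scalp locations.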


Email: o.scherbakova@gmail.com


--------------------------------------


Rapid and automatic activation and inhibition of cortical motor system in spoken word
comprehension

Yury Shtyrov (1,*), Anna Butorina (2), Anastasia Nikolaeva (2), Tatyana Stroganova (2)

(1) Medical Research Council Cognition & Brain Sciences Unit, Cambridge, UK

(2) MEG Centre, Moscow University for Psychology & Education, Russia


Perception and action are functionally linked in the brain, but a hotly debated question is whether and to what extent cortical motor circuits are immediately involved in the perception and comprehension of external information, or whether their activation in perceptual tasks is a secondary post-comprehension phenomenon. To address this, we used MEG in combination with individual MR images to investigate the time course and neuroanatomical substrates of activations elicited in the human brain by action-related verbs and nouns, which were presented auditorily outside the focus of attention under a non-linguistic visual distractor task. We found that very early on in the course of perception, starting from about 80 ms after the information was available in the auditory input, both verbs and nouns produced characteristic somatotopic activations in cortical motor areas, with words related to different body parts activating the corresponding body representation in the motor cortex (confirmed through a motor localiser task). Moreover, near-simultaneously with this category-specific activation we observed suppression of motor-cortex activation by competitor words with incompatible action semantics, for the first time documenting the operation of the neurophysiological principle of lateral inhibition in neural word processing. The extremely rapid speed of these activations and deactivations, their emergence in the absence of attention and their similar presence for words of different lexical classes testify, in our view, to automatic involvement of motor-specific circuits in the perception of action-related language.


Email: yury.shtyrov@mrc-cbu.cam.ac.uk



------------------


Language and Emotion: How semantics is grounded deep inside the body


Gabriella Vigliocco

University College London


Embodied research has typically considered how language (especially words) links to our sensory-motor experience. Our work, however, has demonstrated that we need to move beyond sensory-motor experience in order to better understand how we acquire knowledge referring to more abstract concepts and words. In particular, we have developed the hypothesis that knowledge of abstract concepts and words is rooted in our affective experience. In the talk, I will report on a series of experiments that investigate preconscious processing of words and faces and show how the affective connotation of words, especially abstract words, is preconsciously processed, presumably by the same systems that process emotion in other evolutionarily important stimuli such as faces. Moreover, these effects are robust in subjects’ first and second language. We take these results to support views in which the emotion system provides grounding to semantic and linguistic representations. They further provide clear constraints on any hypothesis concerning neural and psychological mechanisms for extracting affective associations from social stimuli (faces and words).


Email: g.vigliocco@ucl.ac.uk


------------------


Bilingual language processing and evidence for automatic perceptual simulation during
sentence comprehension



Nikola Vukovic & John N. Williams

University of Cambridge

Evidence from behavioral and neuroimaging studies strongly supports the claim that, when understanding language, people perform mental simulation using those parts of the brain which support sensation, action, and emotion. Several studies within the embodied cognition framework also suggest that comprehending sentences involves building situation models of the sentential content, including detailed visual information. It is debatable, however, to what extent this is an automatic process.

Modifying a classic sentence-picture matching task, the current study presents novel evidence that simulated mental representations are indeed highly automatic, and qualitatively analogous to modal states involved in actual vision and perception. We exploit the well-known fact that bilinguals routinely and automatically activate both their languages during comprehension to test whether this automatic process is, in turn, modulated by embodied simulatory processes. Dutch speakers of English heard sentences in their second language which implied specific distance relations, and subsequently had to respond to pictures of objects matching or mismatching this implied distance. Crucially, some of the English sentences contained words which sound similar to unrelated Dutch object words. Participants were significantly slower to reject pictures of these objects when their perceptual features matched the distance relationship implied by the sentence. The same effect was not found for pictures of unrelated control objects. These results suggest that bilinguals not only activate task-irrelevant meanings of interlingual homophones, but also automatically inflect this meaning in a detailed perceptual fashion consistent with the implied sentential content.

The present study provides novel evidence for embodied semantics and is, to our knowledge, the first to successfully test the nature of non-selective meaning access in bilinguals through methods developed in embodied cognition research.

Email: vukovicnikola@gmail.com