About the Reenactment of Digitized Sound and Music

About the Reenactment of Digitized Sound and Music by Playful Interaction Scenarios
Plateformes mobiles pour de nouvelles approches créatives de la musique (Mobile platforms for new creative approaches to music)
GRAME, October 15, 2012
Norbert Schnell
IMTR, IRCAM – Centre Pompidou
Music making and listening
yesterday ... and today
Replaying Music
... every piece is a potential instrument
- animating recorded audio
- analyzing and relating action and sound morphologies
- designing playing techniques and scenarios
- real-time captured bodily action
- digitized sound materials
- interactive real-time processes
Ingredients ... and recipes
- design of playing techniques & scenarios
- interactive sound & music processes
- sound materials
Atelier des Feuillantines, IRCAM IMTR, 2008
Atelier des Feuillantines, IRCAM IMTR, 2009
- gesture capture, gesture annotation, recognition / segmentation
- score, recorded sound, alignment / annotation
Atelier des Feuillantines, IRCAM IMTR, 2010
Party in the USA
IRCAM IMTR, VoxLer, 2011
(Re-)Making Sound
Every Sound (Environment) is a Potential Instrument
Grainstick
Pierre Jodlowski, IRCAM, 2010
Interlude
IRCAM IMTR, NoDesign, GRAME
et al.
2011
Topophonie Mobile
Orbe, IRCAM IMTR, ENSCI, 2011
MindBox
C. Graupner, R. Zappala, N. Schnell, 2009/2010
Breaking and Remaking Time
- Respect and adapt to interaction
- Eliminate and recreate from scratch according to interaction
- Model, respect, recreate according to sound and interaction (rules, mechanics, topologies)
The Beat-Boxer Case
Interlude
IRCAM IMTR, NoDesign, GRAME et al. 2011
Mogees
IRCAM IMTR, Goldsmiths, 2011/2012
Dirty Tangible Interfaces
User Studio, IRCAM IMTR, 2012
Playing on and with Anything
... every object is a potential instrument
The MO-Kitchen
IRCAM IMTR, 2011
Urban Musical Game – Early Tests
IRCAM, NoDesign, Phonotonic, 2011
Urban Musical Game – Festival Futur en Seine
IRCAM, NoDesign, Phonotonic, 2011
Finally...
... every gesture defines a potential instrument
Atelier des Feuillantines, IRCAM IMTR, 2007
Creating Instruments by Listening
- From supporting actions causing sound ... to supporting listening as an activity
- Listening is an activity
- Listening is the primary activity in playing
- Let the instrument act (simulation, automation, animation, ...)
- Let the instrument learn
Interactive Sound Reproduction

Phonograph and HiFi
- choose record
- start/stop record
- adjust volume
- create and explore listening situation
- perform (hip-hop) and compose (musique concrète)

Spatial audio (binaural, WFS, Ambisonics)
- explore virtual scenes

Reenactment
- explore virtual actors
(* explore = “engage in interaction with”)
Beyond Stream Processing
Interactive Audio System (of today)
- performer → interface → computer: sensor data streams in, sound streams out
- inside the computer: analysis turns sensor data streams into control parameter streams; mapping turns these into synthesis parameter streams; synthesis produces sound streams
- more generally: afferent stream processing of sensor data streams and efferent stream processing of sound streams, linked by analysis, anticipation, learning, and planning, up to the synthesis of action and sound
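The analysis / mapping / synthesis chain above can be sketched as three small functions (a minimal illustration with made-up names, not the IMTR code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Afferent side: reduce a raw sensor frame (e.g. accelerometer values)
// to a single control parameter, here the RMS energy.
inline float analyzeSensorFrame(const std::vector<float>& accel) {
  float sum = 0.0f;
  for (float a : accel) sum += a * a;
  return std::sqrt(sum / accel.size());
}

// Mapping: turn the control parameter into a synthesis parameter,
// here an amplitude softly saturated into [0, 1).
inline float mapToAmplitude(float energy) {
  return energy / (energy + 1.0f);
}

// Efferent side: synthesize one audio frame (a sine scaled by the amplitude).
inline std::vector<float> synthesizeFrame(float amp, float freq, int n, float sr) {
  const float PI = 3.14159265f;
  std::vector<float> out(n);
  for (int i = 0; i < n; ++i)
    out[i] = amp * std::sin(2.0f * PI * freq * i / sr);
  return out;
}
```

One sensor frame flows through analyzeSensorFrame → mapToAmplitude → synthesizeFrame, mirroring the analysis, mapping, and synthesis boxes on the slide.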
IMTR Tools
Constructing a Memory of Action and Sound
MuBu
Data Container

MuBu container
- continuous audio, motion, and descriptor data
- segmentation and classification

External analysis and annotation tools
- loading SDIF and text files

Visualization and editors
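A multi-track container with segment markers, as listed above, might look like this minimal sketch (hypothetical names and layout, not the actual MuBu API):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// One time-tagged frame: a time in seconds plus a data vector
// (audio samples, motion values, or descriptor values).
struct Frame {
  double time;
  std::vector<float> data;
};

// A hypothetical multi-track container in the spirit of the slide:
// named tracks of time-tagged frames plus a shared list of segments.
class Container {
 public:
  void addFrame(const std::string& track, double time, std::vector<float> data) {
    tracks_[track].push_back({time, std::move(data)});
  }
  void addSegment(double begin, double end) { segments_.push_back({begin, end}); }

  // Frames of one track falling into a given segment (for segment-wise access).
  std::vector<Frame> framesInSegment(const std::string& track, size_t seg) const {
    std::vector<Frame> out;
    for (const Frame& f : tracks_.at(track))
      if (f.time >= segments_[seg].first && f.time < segments_[seg].second)
        out.push_back(f);
    return out;
  }

 private:
  std::map<std::string, std::vector<Frame>> tracks_;
  std::vector<std::pair<double, double>> segments_;
};
```

Audio, motion, and descriptor tracks share the same time axis, so one segmentation can index all of them.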
PiPo
Plugin Interface

Processing of afferent data streams
- analysis/reduction of audio and motion data streams
- time-tagged or regularly sampled streams of scalars, vectors, or matrices
- filtering, descriptor extraction, segmentation

Abstract C++ class
- propagate stream attributes (time-tags, frame rate and dimensions, etc.)
- reset stream processing
- propagate frames
- finalize stream processing

Hosts
- online and offline, synchronous and asynchronous processing
- chain PiPo plugins (e.g. “slice:fft:bands:dct”)
- first integrations (standard Max/MSP externals, MuBu for Max/MSP, IAE)
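The four propagation steps of the abstract class could be sketched roughly like this (a hypothetical interface inspired by the slide, not the real PiPo class):

```cpp
#include <cassert>
#include <vector>

// Hypothetical abstract module: a host first propagates the stream
// attributes, may reset, then propagates frames, and finally finalizes.
class Module {
 public:
  virtual ~Module() {}
  virtual void streamAttributes(double frameRate, int frameSize) = 0;
  virtual void reset() = 0;
  virtual void frames(const std::vector<float>& in, std::vector<float>& out) = 0;
  virtual void finalize() {}
};

// Example plugin: reduce each incoming frame to its mean (a crude descriptor).
class MeanModule : public Module {
 public:
  void streamAttributes(double /*rate*/, int frameSize) override { size_ = frameSize; }
  void reset() override {}
  void frames(const std::vector<float>& in, std::vector<float>& out) override {
    float sum = 0.0f;
    for (float v : in) sum += v;
    out.assign(1, sum / float(size_));  // one scalar out per frame in
  }

 private:
  int size_ = 1;
};
```

A host chains such modules by feeding each module's output frames into the next one, which is what a chain declaration like “slice:fft:bands:dct” expresses.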
ZsaZsa
Content-based Synthesis

Connecting to MuBu container
- audio data
- segmentation
- description

Time-domain overlap-add synthesis
- asynchronous granular (textures)
- synchronous granular (voice, harmonic sounds)
- concatenative (attacks, transients, beats)

Synchronous scheduling (callbacks)
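Time-domain overlap-add granular synthesis can be illustrated with a short sketch: windowed grains taken from a source buffer are summed at overlapping output positions (assumed details: Hann window, fixed grain size and hop):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal granular overlap-add: numGrains grains of grainSize samples,
// read from the source at srcStep intervals, written every hop samples.
std::vector<float> granular(const std::vector<float>& src, int grainSize,
                            int hop, int numGrains, int srcStep) {
  std::vector<float> out(hop * (numGrains - 1) + grainSize, 0.0f);
  const float PI = 3.14159265f;
  for (int g = 0; g < numGrains; ++g) {
    int srcPos = (g * srcStep) % (int)(src.size() - grainSize);  // grain onset in source
    int outPos = g * hop;                                        // overlapping output position
    for (int i = 0; i < grainSize; ++i) {
      float w = 0.5f - 0.5f * std::cos(2.0f * PI * i / (grainSize - 1));  // Hann window
      out[outPos + i] += w * src[srcPos + i];
    }
  }
  return out;
}
```

Decoupling srcPos from outPos is what lets granular playback stretch, freeze, or scrub the recorded sound; synchronous granular and concatenative variants differ mainly in how onsets and hops are chosen.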
Time and Data Models

kD-tree
- k-NN unit selection

GF gesture follower
- gesture and audio recognition
- gesture and audio following
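k-NN unit selection can be illustrated in a few lines; a kD-tree only accelerates the search, the selection criterion is the same (brute-force sketch with made-up names):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Return the indices of the k units whose descriptor vectors are closest
// (in squared Euclidean distance) to the target descriptor.
std::vector<int> kNearest(const std::vector<std::vector<float>>& units,
                          const std::vector<float>& target, int k) {
  std::vector<std::pair<float, int>> dist;
  for (int i = 0; i < (int)units.size(); ++i) {
    float d = 0.0f;
    for (size_t j = 0; j < target.size(); ++j) {
      float diff = units[i][j] - target[j];
      d += diff * diff;
    }
    dist.push_back({d, i});
  }
  std::sort(dist.begin(), dist.end());  // ascending by distance
  std::vector<int> out;
  for (int i = 0; i < k && i < (int)dist.size(); ++i) out.push_back(dist[i].second);
  return out;
}
```

In content-based synthesis the selected indices point at segments in the container, whose audio is then played back by the overlap-add engine.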
IAE
IMTR Audio Engine

Features
- MuBu container
- PiPo host and plugins (audio descriptor extraction and segmentation)
- ZsaZsa overlap-add synthesis
- kD-tree
- GF

Platforms
- Max/MSP externals (MuBu & GF for Max/MSP)
- Unity 3D plugin
- Mac OS X / Windows (beta release planned for 2013)
- iOS (development platform in collaboration with Orbe/ENSCI)
What’s Next?
IAE Wish List
- GF II, multimodal sound and action modeling
- Action models (physics simulation, automata, etc.)
- Hybrid synthesis (granular/concatenative + additive)
- Redefining the rapid prototyping environment
Pragmatics of Rapid Prototyping

Change program while running (or quasi)

Mix programming levels
- build and use graphical control interface (buttons and sliders)
- create, store, recall, and integrate presets
- create, connect, and change C/C++ building blocks

Mix platforms (e.g. Max/MSP, Unity 3D, iOS applications)
- inter-platform communication (parameters in OSC)
- inter-platform file exchange (data, presets, and code in SDIF, text, XML)

Stay independent and accumulate libraries
... of easily reusable code (C/C++, JavaScript, C#)
Solutions?
- Max/MSP with integrated Clang compilation
- Unity 3D with JavaScript and C# scripting
- Xcode 4 and iOS
- JavaScript/HTML WebAudio (ANR project Wave, 2012-2014)
Including work by and in collaboration with...
Frederic Bevilacqua (IMTR, IRCAM – Centre Pompidou)
Riccardo Borghesi (IMTR, IRCAM – Centre Pompidou)
Diemo Schwarz (IMTR, IRCAM – Centre Pompidou)
Julien Bloit (IMTR, IRCAM – Centre Pompidou)
Jean-Philippe Lambert (IMTR, IRCAM – Centre Pompidou)
Emmanuel Flety (IMTR, IRCAM – Centre Pompidou)
Baptiste Caramiaux (IRCAM IMTR / Goldsmiths, London)
Bruno Zamborlin (IRCAM IMTR / Goldsmiths, London)
Fabrice Guedy (Atelier des Feuillantines, Paris)
Nicolas Rasamimanana (Phonotonic, Paris)
Interlude project partners (especially GRAME, NoDesign, Atelier des Feuillantines)
Topophonie project partners (especially ENSCI, Orbe)
Christian Graupner (HUMATIC, Berlin)