Draft Functional Specification for the Callimuse System


Dec 10, 2013

CSC354 Introduction to Software Engineering, Dr. Dale Parson, Fall 2013


1. Introduction by the Instructor (Customer)

This is an outline document that addresses some of the functionality left open by the requirements document. This is an attempt to nail down some important usage requirements. Playing this system should feel like playing an instrument such as a violin. It should not be necessary to “tune the violin” by escaping to GUI-like modal panels during play. It should be possible to sit back, watch the dome, and stroke in ideograms. It is OK to use modal GUI panels for initial configuration and for infrequent setting changes.

See http://faculty.kutztown.edu/parson/fall2013/RequirementsCallimuse.pdf for the requirements specification. The present, functional specification is not comprehensive. It builds on, constrains, and resolves issues left open by the requirements spec.

2. System Level Requirements

Figure 1 is a hybrid UML Deployment Diagram and Dataflow Diagram.

As noted in the diagram, there are five categories of data in a message going from a given Android UI device to the graphical server.




1 “Callimuse” is for “beautiful music” or perhaps “beautiful muse,” informally known as the “Schmutz System.”

A stroke centroid is essentially the center point of a stroke in the unit circle (using polar geometry coordinates) that outlines the dome. The Android tablet will send 1-pixel-wide stroke coordinates to the graphics server. This approach is a trade-off. It minimizes Android algorithm complexity, and it trades size versus graphical fidelity in going to the server. All final rendering, including features such as brush stroke, along with ideogram recognition, occurs on the server. The tablet basically sends stroke information to the server. Display on the tablet should be relatively quick and simple.

Since Android cannot sense finger pressure, the tablet software will derive pressure from speed. Slower drawing “drops more paint” and hence maps to more pressure. Each centroid pixel is paired with its pressure in the OSC message. OSC necessitates using “parallel arrays” to pass three coordinates per centroid pixel, namely, radius[] between 0.0 (center) and 1.0 (edge of dome), angle theta[] in the range [0.0, 2pi) radians, and virtual pressure[] for each centroid pixel.
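The speed-to-pressure mapping and the parallel-array packing could be sketched as follows. The linear mapping and the max_speed cap are illustrative assumptions; the spec only requires that slower drawing yield more pressure.

```python
import math

def virtual_pressure(speed, max_speed=2.0):
    """Slower drawing "drops more paint": pressure falls linearly from
    1.0 (stationary finger) to 0.0 at max_speed.  Both the linear shape
    and the max_speed cap are tuning assumptions."""
    clamped = min(max(speed, 0.0), max_speed)
    return 1.0 - clamped / max_speed

def pack_parallel_arrays(samples):
    """Split (radius, theta, speed) centroid samples into the three
    parallel arrays the OSC message carries: radius[], theta[], pressure[]."""
    radius = [r for r, _, _ in samples]
    theta = [t % (2 * math.pi) for _, t, _ in samples]      # keep in [0, 2pi)
    pressure = [virtual_pressure(s) for _, _, s in samples]
    return radius, theta, pressure

# One slow sample (full pressure) and one fast sample (no pressure).
radius, theta, pressure = pack_parallel_arrays(
    [(0.5, math.pi, 0.0), (0.9, 3 * math.pi, 2.0)])
```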

Other parameters (color, brush stroke, momentum, and spin) apply to an entire ideogram. Color requires having a palette of highly contrasting colors available on the tablet for fast color changes. Stroke type requires a second palette of line-drawing types. Momentum and spin require an additional, momentum-and-spin stroke. The ideogram strokes must display on the dome in real time, as a player draws them, so a given ideogram will display statically until its player follows up with a momentum-spin stroke that does not draw brush strokes, but rather sets an unmoving ideogram that just appeared on the dome into motion. TabletID-ideogramSerialNumber tags are probably necessary in each OSC message, so that the graphical server knows which ideogram gets which momentum data in the follow-up momentum message.
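A minimal sketch of such a tag, assuming a simple "-"-joined string; the spec names the two fields but leaves the encoding open.

```python
def make_tag(tablet_id, serial):
    """Build the tabletID-ideogramSerialNumber tag carried in each OSC
    message.  The "-"-joined string format is an assumption; the spec
    only names the two fields."""
    return f"{tablet_id}-{serial}"

def parse_tag(tag):
    """Recover the tablet ID and per-tablet serial number on the server
    side, so follow-up momentum data reaches the right ideogram."""
    tablet_id, serial = tag.rsplit("-", 1)
    return tablet_id, int(serial)

tag = make_tag("tablet3", 42)
```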

The server will send OSC/UDP broadcast datagrams if possible to emit a reduced copy of the incoming stroke data (average pressure, color, strokeType, momentum and spin, but not centroid pixels in the initial implementation), along with current location on the dome (polar coordinates), orientation on the dome (resulting from spin), closest matching ideogram, and intensity [0.0, 1.0] for when the server fades ideograms away incrementally.

The MIDI generator and MIDI renderer components will typically reside on a single computer, but not necessarily so. The former turns OSC broadcast data into MIDI note and control data, and the latter converts it into sound.

3. Mobile User Interface Requirements

This discussion centers around the user interface (UI) on the tablet. Here is a figure from the Mobile Tablet Team’s Requirements Specification.


During normal play there will be a circle on the Android UI into which a player can stroke ideogram strokes and momentum/spin strokes. The player will also be able to zoom in and out on sections of the overall planetarium by sweeping two fingers apart to zoom in, and together to zoom out, as is typical for these devices. There will be a thumbnail to the side that shows the entire dome, along with a smaller embedded circle that shows the current zoom area. Finally, there will be two palettes, a color palette of distinct, high-contrast colors, and a stroke palette of line stroke types. The planetarium needs relatively basic, high-contrast colors. It is OK to have pop-up panels for entering configuration data, but using them should be an exception, like tuning a guitar. You do not normally tune a guitar while playing. Also, UIs with many modes do not suit real-time performance very well. We will avoid them. It will be necessary to have each player stroke in orientation with respect to North-East-South-West in the room, for the direction the player is facing, before interacting with the server. Approximate orientations are labeled around the dome base in the room. Presumably a player may spend most of her/his time zoomed into the region that she/he is facing directly. We will have a second palette for setting a stroke type (brush style) for the graphics server; see that discussion below.


4. Networked Communication Requirements

The above sections cover many of the OSC/UDP datagram requirements. We will test with four to six players. We will plan to use a wireless LAN, probably using a laptop or desktop machine as the ad hoc router instead of using a wireless router. Experiments in an independent study show excessive latency when using a router. We can experiment. In any case, using a router versus an ad hoc LAN routed by a computer is mostly invisible to the OSC/UDP protocol.

One feature we need to add is a periodic broadcast via UDP (possibly OSC/UDP) from the server giving its IP address on the LAN and its port. Mobile devices can then join and leave a session without requiring typed setup. We have example code for using UDP broadcast datagrams to announce an RMI server for HexAtom that we can adapt to announcing an OSC/UDP server on the LAN periodically.
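A minimal sketch of the periodic announcement. The JSON payload and the port number are assumptions for illustration; the real announcement may well be an OSC message, as noted above.

```python
import json
import socket

ANNOUNCE_PORT = 9000   # assumption: any agreed-upon LAN port would do

def announcement(server_ip, osc_port):
    """Build the server's periodic announcement: its IP address on the
    LAN and its OSC/UDP port, so tablets can join without typed setup.
    JSON is an illustrative encoding, not part of the spec."""
    return json.dumps({"ip": server_ip, "port": osc_port}).encode("utf-8")

def broadcast_once(server_ip, osc_port):
    """Send one announcement datagram to the LAN broadcast address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(announcement(server_ip, osc_port),
                ("255.255.255.255", ANNOUNCE_PORT))
    sock.close()

payload = announcement("192.168.1.10", 8000)
```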

An OSC message consists of an address string followed by an array of objects, where the address string is an application-oriented keyword starting with a forward slash, such as /stroke, /momentum (e.g., from an Android) or /ideogram (from the server), followed by the array of objects that comprise arguments for that string. Each entry in the array can be a String, Integer, Float, Boolean, or an array of one of these types. The position in the outer array defines which argument it is, just like passing parameters to a function call. I have not tested passing an array as an argument with our Java library yet. I will go over an OSC demo in class.
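To make the address-plus-argument-array idea concrete, here is a sketch that models a message as an (address, args) pair and dispatches on the address string. The handler bodies are placeholders, not part of the spec.

```python
def dispatch(message, handlers):
    """Route an OSC-style message, modeled as an (address, args) pair,
    to the handler registered for its address string.  Argument position
    selects the parameter, just like a function call."""
    address, args = message
    return handlers[address](*args)

handlers = {
    # /stroke: the three parallel arrays for one stroke (placeholder body).
    "/stroke": lambda radius, theta, pressure: len(radius),
    # /momentum: momentum and spin for a whole ideogram (placeholder body).
    "/momentum": lambda momentum, spin: (momentum, spin),
}

n = dispatch(("/stroke", [[0.1, 0.2], [1.0, 1.5], [0.8, 0.4]]), handlers)
```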

The next big job for this team is getting an application-level protocol for the types of OSC messages, i.e., address strings and their arrays of parameter types.
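One hypothetical first cut at such a protocol table, with a validity check. The addresses and parameter lists here are placeholders; the actual message set is this team's design task.

```python
# Hypothetical protocol table: each address string maps to the sequence
# of argument types it expects.  Contents are illustrative only.
SCHEMA = {
    "/stroke":   [str, list, list, list],   # tag, radius[], theta[], pressure[]
    "/momentum": [str, float, float],       # tag, momentum, spin
    "/ideogram": [str, str, float, float],  # tag, name, orientation, intensity
}

def valid(address, args):
    """Check an argument array against the declared parameter types."""
    types = SCHEMA.get(address)
    return (types is not None and len(args) == len(types)
            and all(isinstance(a, t) for a, t in zip(args, types)))

ok = valid("/momentum", ["tablet1-7", 0.8, 1.5])
```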

5. Planetarium Graphical Engine Requirements

The graphics server team did a good job of partitioning the work into distinct, complementary features in the requirements spec. Rather than repeat details, I am going to nail down a few of them here. Many of the features in the requirements spec are feasible. My advice is to start simple, then extend later if you get time.

The graphics engine must accept data from Android users and render it onto the dome. There is more than just stroke plotting to do. There is location, momentum, spin, bouncing off the edge versus disappearing, evaporating over time, brush stroke styling (there will be a palette for brush style in the UI that passes this info to the graphics server), ideogram recognition, and generation of broadcast OSC messages to convey these data to the MIDI generator.

There is perhaps more to do here than in any other component of the system. We may have to steal someone from another team.


Here are the ideograms that we will use. The names are suggestive, not exact.

Brook, as in a small stream, topologically a stroke that never crosses itself.

Stream, as in a substantial stream, topologically two strokes made with two fingers.

River, topologically three strokes made with three fingers.

Earth, any ideogram that crosses itself once.

Sky, any ideogram that crosses itself twice.

Travel, any ideogram that crosses itself three times.

Wormhole, topologically = brook, but spirals clockwise or counterclockwise at least 360°.

Human, two distinct Earths with a continuously connecting brook.

The descriptions under the figures outline how to detect each ideogram. Orientation in space, angular versus smooth, etc. do not matter. What matters is mostly topology: how many strokes, and how many crossings. Only brook and wormhole are identical with respect to topology. Distinguishing wormhole requires a little geometry.
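The topology rules above can be sketched as a classifier over stroke count and crossing count. The is_spiral flag stands in for the extra geometry needed to separate wormhole from brook; Human, a compound of two Earths and a connecting brook, would need multi-ideogram matching and is omitted here.

```python
def classify(stroke_count, crossings, is_spiral=False):
    """Classify an ideogram from its topology.  is_spiral is assumed to
    be precomputed by the geometric spiral test (>= 360 degrees of turn)."""
    if stroke_count == 2:
        return "stream"
    if stroke_count == 3:
        return "river"
    # Single-stroke ideograms are told apart by self-crossings.
    name = {0: "brook", 1: "earth", 2: "sky", 3: "travel"}.get(crossings)
    if name == "brook" and is_spiral:
        return "wormhole"
    return name
```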

6. Planetarium Audio Engine Requirements

As noted earlier in this document, the plan is to make “play” like instrument play for the humans. A player sketches on the dome using strokes that map to music. We want to minimize the control aspect of the human-system interface to setup time and to one foot controller run by the “conductor” for the session. Otherwise, the system should feel like playing instruments.

There are location, momentum, spin, disappearing / evaporating, color, brush stroke styling, and ideogram data coming from the graphics engine. There is a lot of leeway in how the two MIDI processes of the Deployment / Dataflow Diagram above can render these data. Here is a subset that I am specifying.

Location maps to section of the virtual orchestra. As an ideogram moves from one of five “quadrants” to another, where a quadrant is the region closest to one of the five speakers, its effect on its prior quadrant fades out, and its effect on its new quadrant increases. Signal level changes should cross fade. They should not be abrupt.
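The cross fade could be sketched with an equal-power curve. Equal-power is one common choice, not mandated here; the spec only forbids abrupt level changes.

```python
import math

def crossfade_gains(progress):
    """Equal-power cross fade: as an ideogram's move from its prior
    quadrant to a new one progresses from 0.0 to 1.0, the prior
    quadrant's gain falls and the new quadrant's gain rises, with no
    abrupt jump.  Equal-power is an assumption; any smooth curve works."""
    p = min(max(progress, 0.0), 1.0)
    return math.cos(p * math.pi / 2), math.sin(p * math.pi / 2)

start = crossfade_gains(0.0)   # all level in the prior quadrant
end = crossfade_gains(1.0)     # all level in the new quadrant
```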

Each section of the virtual orchestra has up to 3 voices (MIDI channels), giving us up to 15 voices for the total of 5 speakers. We will use the 16th MIDI channel for broadcasting to all, to be used for ambient pads and other sounds with poor spatial properties. This team needs to design its five sections of the orchestra, come up with instrument sounds that work well together, and complement the capabilities of the other sections. Melodic, percussive, rhythmic / beat-oriented (tonal percussion), and background chording are rough ideas for sections. The audio team must flesh this aspect of the system out.
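A sketch of the channel layout implied above, assuming 0-based MIDI channel numbers: three voices per section for five sections fill channels 0-14, with channel 15 (the 16th) reserved for the ambient/broadcast layer.

```python
AMBIENT_CHANNEL = 15   # the 16th MIDI channel: shared ambient pads

def channel(section, voice):
    """Map a speaker section (0-4) and one of its voices (0-2) to a
    MIDI channel 0-14.  The 0-based numbering is an assumption."""
    if not (0 <= section < 5 and 0 <= voice < 3):
        raise ValueError("section must be 0-4 and voice 0-2")
    return section * 3 + voice

first = channel(0, 0)
last = channel(4, 2)
```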

Some of the properties such as color could map to MIDI-controlled FX such as reverb, chorus, flanging, phasing, granulation, etc. Evaporation / disappearance maps to signal level. This team will flesh these matters out.

The diagram of the dome appears on the next page. The Ableton Live software that comprises the output stage of the MIDI Renderer in the Deployment / Dataflow diagram has only stereo at its main output, but it has Send / Return FX for which we can use the Sends to create spatial sounds. Those are five speakers, not five pairs. There are ways to mix each speaker’s Send in a Live channel. We probably want to use the Main Stereo Out for headphone monitoring and debugging. We’ll need to mix at least some subset of the sections for the subwoofer. Maybe there will be a bass section of the orchestra.