Developing Software Synthesizers




Dan Bogucki

Computer Technology

University of Wisconsin-Platteville



Abstract


With the recent surge in popularity of electronic dance music, one has to look at music synthesizers, which are an integral part of creating sounds and the backbone of this type of music. Digital music production techniques have been replacing hardware components because they are much cheaper than their hardware predecessors, more versatile, and more accurate. There are many different types of software synthesizers available to the public today. Each offers different characteristics and options for manipulating sound waves, but all are based on the same primary principles. Software synthesizers have been developed that are still based on hardware designs, but offer improved reliability, better tuning, more options for manipulating sound, and more waveforms than their hardware counterparts. This paper delves into what makes up the core of a software synthesizer and the different types of synthesizers available, and provides background information on sound waves and the history of synthesizers.





Introduction


Sound synthesis is the process of using electronics to create a sound pressure wave from scratch and then to control and manipulate it [8]. Software synthesizers have become more advanced in recent years, as well as more accessible. In addition, software synthesizers have been able to mimic their hardware counterparts and improve on designs that were once possible only in hardware. Through innovation and development, new software synthesis techniques have also been created to offer users different ways to manipulate sound waves to their preference. With the recent acceptance and popularity of electronic dance music, software synthesizers will see more use and become more advanced as this music continues to grow.



Sound


Sound is moving energy that travels as a pattern of changing pressure [8]. You can also define sound as the perceived vibration or oscillation of air resulting from the vibration of a sound source [13]. Your sound source can be anything, from a guitar soundboard to a speaker cone, a vocal cord, a hair dryer, or a car engine. Anything that can produce sound is a sound source.




Sound Waves


As your sound source expands and contracts, a pattern of changing air pressure energy moves away from the source. This is known as a sound wave [8]. There are four basic waveforms: the sine wave, the square wave, the sawtooth wave, and the triangle wave.


Figure 1: Picture of the basic waveforms



Every synthesizer will have these four basic waveforms for users to manipulate. A sine wave is created when a sound source expands and contracts absolutely consistently [8]; the sound source is said to be moving in simple harmonic motion [8]. This is not found in nature. Sine waves can only be created by an electronic sound source, for example an oscillator found on a synthesizer. Sine waves are the building blocks of all complex sounds, including square waves, sawtooth waves, and triangle waves.


You can think of all the complex waveforms as being made up of many individual sine waves of differing frequencies which interact and interfere with each other. Each individual vibration, which is a sine wave, is known as a harmonic. Square waves are made up of at least 20 harmonics and contain only odd harmonics. Triangle waves are comprised of around 10 harmonics, which are also only odd harmonics. Sawtooth waves are comprised of around 25 harmonics but, unlike the other waves, include both even and odd harmonics.
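
As an illustration, the following C++ sketch builds an approximate square wave by summing only odd sine harmonics, as described above. It is a minimal example, assuming a 44100 Hz sample rate and the textbook Fourier-series weighting of 1/k per harmonic; a real oscillator would also worry about efficiency and aliasing.

#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Build one second of an approximate square wave by summing odd sine
// harmonics (the Fourier series of a square wave weights harmonic k by 1/k).
std::vector<double> squareFromHarmonics(double freq, int sampleRate,
                                        int numHarmonics) {
    std::vector<double> samples(sampleRate);
    for (int n = 0; n < sampleRate; ++n) {
        double t = static_cast<double>(n) / sampleRate;
        double sum = 0.0;
        for (int k = 1; k <= 2 * numHarmonics - 1; k += 2)   // odd harmonics only
            sum += std::sin(2.0 * kPi * k * freq * t) / k;
        samples[n] = (4.0 / kPi) * sum;                      // scale toward +/-1 peaks
    }
    return samples;
}

With numHarmonics set to 20 the result is audibly close to a true square wave; triangle and sawtooth waves can be approximated the same way with their own harmonic weightings.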




Sound Synthesizers


A sound synthesizer is an electronic instrument capable of producing and generating a great variety of sounds by combining different frequencies [8]. A synthesizer generates electric signals (or waveforms, as they are also known) which are converted to sound through speakers.



History of Sound Synthesizers


Technology for the sound synthesizer first started being developed in the late 1860s, when Hermann von Helmholtz built a number of electro-mechanical oscillators to aid his research into the human perception of sound [11]. The devices he created only generated simple sounds, but he laid the groundwork for the synthesizers developed later. Next, in 1876, Elisha Gray created the musical telegraph, which was based on telephone technology [10]. In 1906 Lee de Forest invented the vacuum-tube triode amplifier valve [11]. Engineers started to see the possibility of using the new technology to create electronic musical instruments.


It wasn't until the 1950s, when transistors became available, that there was a breakthrough in synthesis technology. Harald Bode created the Melochord in 1961 using this technology [11]. It was the first voltage-controlled synthesizer. In 1964, Robert Moog constructed a transistor voltage-controlled oscillator and amplifier [11]. A few years later he created the Moog synthesizer, the first synthesizer made available to the public.


While inventors like Bode and Moog were creating hardware synthesizers, Max Mathews of Bell Telephone Labs began exploring the use of computers to generate sound. He first created a software program called MUSIC I in 1957, followed by MUSIC II in 1958 [11]. These programs were written in assembler code for the IBM 704 mainframe. In 1960, Mathews created MUSIC III, which ran on the second-generation IBM 7094 [11]. In 1962, alongside Joan Miller, he created MUSIC IV, an improved version of MUSIC III [11]. Again, this was all done in assembler code. It wasn't until 1968, when MUSIC V was created, that something other than assembler was used; this time Mathews used FORTRAN [11]. He had to reorganize the internal functions in order to overcome the inefficiencies of the language.


Also in 1968, Barry Vercoe developed MUSIC 360, a fast version of MUSIC IV for the new generation of IBM 360 mainframes [11]. Then in 1973 Vercoe developed a compact version of MUSIC called MUSIC11, so named because it was written in PDP-11 assembler code specifically for the PDP-11 computer [11]. This was the first digital music synthesis program for mini-computers and was programmed for use with a keyboard and a teletypewriter Visual Display Unit. In 1979, the Australian Fairlight CMI synthesizer was introduced [11]. This synthesizer had two 6-octave keyboards, its own graphics unit, and a typewriter terminal. The user was able to perform different synthesis methods such as additive synthesis, subtractive synthesis, and sampling synthesis [11]. Not only that, but users were able to program the software themselves.






Hardware vs. Software Synthesizers


There is a heavy debate as to whether one type of synthesizer is better than the other. It is good to note that there are many differences between hardware and software synthesizers. For example, software synthesizers use a digital processor while hardware synthesizers use analog circuitry. In addition, as computer technology rapidly advances, so do software synthesizers. It is possible for software programmers to offer more features for a given price, as well as more customization. Also, hardware synthesizers generally can only do subtractive synthesis, so sampling and additive synthesis are just not feasible on them. Even so, many musicians prefer the character and sounds of analog circuitry, so it really boils down to personal preference.



Turning to Software Synthesizers


With personal computers being a common commodity in most households, anyone can use a software synthesizer. Computers are faster, cheaper, and more reliable than hardware synthesizers, and most popular sound cards today have the ability to emulate oscillators, envelopes, filters, frequency modulators, samplers, and any other feature the programmer wants to include. With processors faster and better than ever, it is now possible to compute sounds in real time. In addition, new algorithms and more powerful software give desktop computers the functionality of studio equipment.

New coding schemes for high-quality audio and higher bitrates for file transmission mean that digital music recordings are freely available on the internet for anyone to access and download. Many software synths today allow the user to connect virtual synths in familiar and new ways for endless customizing. With the technology available today, it is really hard to tell the difference between hardware and software synthesizer sounds. Also, software developers are able to interact with operating systems and applications to create sounds at a higher level, in generic terms rather than in a hardware-specific way, so there is less of a limit on the programmer and user. In addition, software is generally cheaper than the hardware version, and you generally get more value for your dollar.



Synthesizer Methodology


There are four main types of synthesis methodology: additive synthesis, subtractive synthesis, frequency modulation (FM) synthesis, and wavetable synthesis. These are not the only synthesis methodologies available, but they are the ones users will come across most often when dealing with synthesizers.








Additive Synthesis


Additive synthesis produces sounds by adding different waveforms together [13]. This process is based on Fourier theory, which can be described by considering these waveforms as blocks that, when put together, add up to a completely different waveform, so a more complex sound can be obtained [13]. Dynamic changes in the sound are created by varying the relative amplitudes of as many as several dozen waveforms. Of these methods, additive synthesis has the most potential to create sounds that mimic musical instruments.



Subtractive Synthesis


Subtractive synthesis produces sounds by generating a waveform that contains more harmonic content than a sine wave. The waveform then passes through a filter which subtracts harmonics to obtain the desired sound [13]. It is basically the reverse of additive synthesis.
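
Below is a minimal C++ sketch of this process. The oscillator and filter here are deliberately naive: a raw sawtooth provides the harmonic-rich source, and a one-pole lowpass removes the upper harmonics (the coefficient formula used is one common choice, not the only one).

#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Subtractive synthesis sketch: generate a harmonically rich sawtooth,
// then remove upper harmonics with a one-pole lowpass filter.
std::vector<double> subtractiveVoice(double freq, double cutoff,
                                     int sampleRate, int numSamples) {
    std::vector<double> out(numSamples);
    double phase = 0.0;
    // Smoothing coefficient for y[n] = y[n-1] + a * (x[n] - y[n-1]).
    double a = 1.0 - std::exp(-2.0 * kPi * cutoff / sampleRate);
    double y = 0.0;
    for (int n = 0; n < numSamples; ++n) {
        double saw = phase / kPi - 1.0;          // naive sawtooth in -1..1
        y += a * (saw - y);                      // the filter subtracts harmonics
        out[n] = y;
        phase += 2.0 * kPi * freq / sampleRate;  // advance oscillator phase
        if (phase > 2.0 * kPi)
            phase -= 2.0 * kPi;
    }
    return out;
}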



Frequency Modulation Synthesis


Frequency modulation synthesis uses rich FM sidebands as harmonics for the synthesized waveform [11]. The FM technique is applied digitally through FM operators. An operator has a digital waveform generator, usually a sine waveform generator, and an envelope. The output of one operator is routed to modulate the frequency of another operator. Modulation of one sine wave by another waveform produces more complex sounds that depend on the frequency and level of the sources. Envelopes vary the relative levels of the modulator and the target to produce dynamic changes in timbre.
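
A sketch of a single two-operator FM voice in C++ might look like the following. For simplicity the modulation index is held constant here, whereas a real FM synthesizer would shape it with an envelope to vary the timbre over time.

#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Two-operator FM sketch: a modulator sine wave varies the phase of a
// carrier sine wave; the modulation index controls sideband richness.
std::vector<double> fmVoice(double carrierFreq, double modFreq, double index,
                            int sampleRate, int numSamples) {
    std::vector<double> out(numSamples);
    for (int n = 0; n < numSamples; ++n) {
        double t = static_cast<double>(n) / sampleRate;
        double modulator = std::sin(2.0 * kPi * modFreq * t);
        out[n] = std::sin(2.0 * kPi * carrierFreq * t + index * modulator);
    }
    return out;
}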



Wavetable Synthesis


Wavetable synthesis (or sampling, as it is also known) is the most widespread method for sound generation. It has become popular because of its low computational cost and ease of operation. Basically, recorded or synthesized musical events are stored in internal memory and are played back on demand [10]. From there, you can manipulate the sound using different playback techniques for sound variation, such as pitch shifting, looping, enveloping, and filtering. Pitch shifting allows the wavetable to play the sound at different pitches. Looping plays the sound recursively during playback. Enveloping denotes the application of a time-varying gain function consisting of attack, decay, sustain, and release.
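
The following C++ sketch shows the core of wavetable playback under simple assumptions: a single-cycle table, linear interpolation between stored samples, and wrap-around looping. Pitch shifting falls out of the fractional read increment.

#include <cstddef>
#include <vector>

// Wavetable playback sketch: one output sample per call. A stored
// single-cycle waveform is read with a fractional increment (pitch
// shifting); wrapping the read position implements looping.
double wavetableTick(const std::vector<double>& table, double& pos,
                     double freq, int sampleRate) {
    double increment = freq * table.size() / sampleRate;
    std::size_t i = static_cast<std::size_t>(pos);
    std::size_t j = (i + 1) % table.size();
    double frac = pos - static_cast<double>(i);
    double sample = (1.0 - frac) * table[i] + frac * table[j];  // interpolate
    pos += increment;
    while (pos >= table.size()) pos -= table.size();            // loop
    return sample;
}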



Makeup of a Synthesizer Today


Synthesizers today all include oscillators, envelopes, filters, and frequency modulators to allow the user to manipulate and create sounds. Some synthesizers will include samplers for wavetable synthesis as well. Also, most synthesizers will have more options and features for sound design, but all of them need at least oscillators, envelopes, and filters.


An oscillator is a control that repeats a waveform with a fundamental frequency and peak amplitude [13]. Oscillators generate the starting waveform and are the first control most producers use when creating sounds. Envelopes are essentially the synthesizer's time-varying gain function [13]. Basically, the envelope is a sequence of events that occurs every time a key is pressed. First, there is the attack, which is how long it takes for the sound to reach full volume. Next, the decay, which is how long it takes for the sound to transition to a sustain level. After the decay there is the sustain, which is how long the sound is held at a certain volume. Finally, the release, which controls how the sound decays to zero, or silence, after the keyboard or MIDI keyboard key is lifted.
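
As a rough illustration, here is a minimal linear ADSR gain function in C++. It assumes all segment times are nonzero and, for simplicity, always releases from the sustain level, which real envelopes handle more carefully.

// Linear ADSR sketch: returns a gain between 0 and 1 for time t seconds
// after note-on, with the key lifted at noteOffTime (both in seconds).
double adsrGain(double t, double attack, double decay, double sustain,
                double release, double noteOffTime) {
    if (t < noteOffTime) {                                        // key held
        if (t < attack)
            return t / attack;                                    // rise to full volume
        if (t < attack + decay)
            return 1.0 - (1.0 - sustain) * (t - attack) / decay;  // fall to sustain
        return sustain;                                           // hold at sustain level
    }
    double r = t - noteOffTime;                                   // time since key lift
    return (r < release) ? sustain * (1.0 - r / release) : 0.0;   // decay to silence
}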


Filters are used to subtract frequency content. They generally behave like an equalizer, but they only subtract frequency content, while equalizers can raise or lower the volume of entire frequency ranges. There are many different types of filters that accomplish different end goals for sounds. The four basic types that appear in synthesizers today are lowpass, highpass, bandpass, and bandreject filters.



Developing a Software Synthesizer


To create a sound we need to move an object, and in this case that object will be a set of speakers or headphones. Keep in mind that the formula for generating a simple sine wave is y = sin(x). Many programming languages have standard mathematics libraries with many of the trigonometric functions represented. The most basic synthesis methods follow this same general scheme: a formula or function is defined that accepts a sequence of values as input [12].


Computers have soundcards with digital-to-analog (D/A) converters. These soundcards are able to generate an electrical signal from a digital number that is given to them. The number must be within a certain range, such as -1 to 1. Generally, 0 gives us the rest position of the speaker element, while a positive number pushes the speaker forward and a negative number pulls the speaker in the other direction, back into the speaker case. Basically, all that the programmer needs to do is come up with a sequence of numbers to feed into the soundcard. The programmer sends a long sequence of numbers, and the soundcard goes through them with exact timing, feeding each number to the D/A converter. This happens at a predetermined rate, typically 44100 samples per second, or hertz (Hz). This rate is used because it allows the user to reproduce sounds over the entire range of the human ear.
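
As a small sketch of that step, the following C++ helper clamps a floating-point sample to the -1 to 1 range and scales it to a signed 16-bit integer, one common format expected by soundcards (the exact format depends on the hardware and the audio API in use).

#include <algorithm>
#include <cstdint>

// Clamp a floating-point sample to -1..1 and scale it to a signed
// 16-bit value for the soundcard's D/A converter.
std::int16_t toPcm16(double sample) {
    double clamped = std::max(-1.0, std::min(1.0, sample));
    return static_cast<std::int16_t>(clamped * 32767.0);
}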



Producing a Simple Sine Wave


A cycle of a sine wave is 2π radians long, and sine waves have a peak amplitude of +/-1. Using the sample rate discussed earlier of 44100 samples per second (since this allows reproduction over the entire range of the human ear), we can use the equation δϕ = 2πf / Fs, where Fs is the sample rate and f is the desired frequency, a value input by the user. Below is some pseudocode that shows how to write a function to produce the sine wave.


Figure 2: Pseudocode to produce a sine wave

Input: Peak amplitude (A), Frequency (f)
Output: Amplitude value (y)
(phase is a persistent variable, initialized to 0)

y = A * sin(phase)
phase = phase + ((2 * pi * f) / samplerate)
if phase > (2 * pi) then
    phase = phase - (2 * pi)
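
A direct C++ translation of this pseudocode might look like the following; the one-second buffer length and the default 44100 Hz rate are assumptions made for the example.

#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Runnable version of Figure 2: fill a buffer with one second of a sine
// wave at the given peak amplitude and frequency.
std::vector<double> sineWave(double amplitude, double freq,
                             int sampleRate = 44100) {
    std::vector<double> samples(sampleRate);
    double phase = 0.0;
    for (int n = 0; n < sampleRate; ++n) {
        samples[n] = amplitude * std::sin(phase);
        phase += 2.0 * kPi * freq / sampleRate;  // the delta-phi step above
        if (phase > 2.0 * kPi)
            phase -= 2.0 * kPi;                  // keep phase within one cycle
    }
    return samples;
}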



Producing a Square Wave


A square wave is constructed very similarly to a sine wave. We use the same approach, cycling through a pattern with a phase variable and resetting once we exceed 2π radians.


Figure 3: Pseudocode to produce a square wave

Input: Peak amplitude (A), Frequency (f)
Output: Amplitude value (y)

if phase < pi then
    y = A
else
    y = -A
phase = phase + ((2 * pi * f) / samplerate)
if phase > (2 * pi) then
    phase = phase - (2 * pi)



Virtual Studio Technology


Virtual Studio Technology (VST) is an interface for integrating software audio synthesizer and effect plugins with audio editors and hard-disk recording systems [9]. VSTs use digital signal processing to simulate traditional recording studio hardware. VSTs are supported by a large number of audio applications and were first developed in 1996 by Steinberg, from whom the technology can be licensed. Currently on version 3, VST allows any third-party developer to create a plugin for use within digital audio workstations.




VSTs run within your digital audio workstation (DAW) and are generally classified as either instruments or audio effects. Generally, synthesizers and samplers are considered instruments, while phasers, compressors, and reverbs are effects. VSTs are the plugin standard for DAWs. Steinberg's VST SDK is a set of C++ classes based around an underlying C API. Steinberg also provides a VST GUI toolkit, another set of C++ classes, so the programmer can add a graphical interface to the VST they develop. In addition, there are several third-party ports available. These include a Java-based version called jVSTwRapper, a Python ctypes-based VST wrapper, and two .NET versions: Noise and VST.NET. There is also a Linux equivalent, the Linux Audio Developers Simple Plugin API (LADSPA), and there is even a header for coding a synth in Ruby.
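
To give a feel for the general shape of such a plugin, here is a hypothetical process callback in C++. The class and method names are illustrative only and are not the actual VST SDK API; the real SDK wraps this buffer-processing pattern in its own C++ classes.

// Hypothetical plugin-style process callback (names are illustrative,
// not the actual VST SDK API). The host hands the plugin blocks of
// input and output sample buffers, one buffer per channel.
class SimpleGainPlugin {
public:
    explicit SimpleGainPlugin(float gain) : gain_(gain) {}

    void process(const float* const* inputs, float* const* outputs,
                 int numChannels, int numSamples) {
        for (int ch = 0; ch < numChannels; ++ch)
            for (int n = 0; n < numSamples; ++n)
                outputs[ch][n] = gain_ * inputs[ch][n];  // apply the effect
    }

private:
    float gain_;  // a single user-facing parameter
};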



Conclusion


The use of software synthesizers in music today is really pushing for more advancements in the synthesis plugin community. Many new methodologies are being created today, along with many improvements on the technology developed back in the 1900s. As electronic music continues to grow, so will the demand for better software that allows producers endless creativity and options. Synthesizers are not only used in music; they also have many other applications that will see use, such as sound effects for movies or live shows.



References


[1] Alles, Harold G. "Music Synthesis Using Real Time Digital Techniques." Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1455942

[2] Crombie, D.; Lenoir, R.; McKenzie, N., (2003, September). "Producing accessible multimedia music." Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1233872&isnumber=27650

[3] Echeverria, U.G.; Castro, F.E.G.; Lopez, J.M.D.B., (2010, February). "Comparison between a Hardware and a software synthesizer." Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5440747&isnumber=5440746

[4] "electronic music." Retrieved from http://www.britannica.com/EBchecked/topic/183823/electronic-music

[5] Gibbons, J. A.; Howard, D.M.; Tyrrell, A.M., (2005, Sept. 9). "FPGA implementation of 1D wave equation for real-time audio synthesis." Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1532084&isnumber=32679

[6] Horner, A., (2000, July). "Low peak amplitudes for wavetable synthesis." Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=848227&isnumber=18448

[7] Lindemann, E., (2007, March). "Music Synthesis with Reconstructive Phrase Modeling." Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4117931&isnumber=4116828

[8] Ottewill, Matt. "Synthesis Types." Retrieved from http://www.planetoftunes.com/synth/synth_types.html

[9] Phelan, Cormac; Bleakley, Chris J.; Cummins, Fred, (2009, June). "Adapting and parameterising auditory icons for use in a synthetic musical instrument." Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5524704&isnumber=5524662

[10] Rabenstein, R.; Trautmann, L., (2001). "Digital sound synthesis by physical modelling." Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=938598&isnumber=20289

[11] Seum-Lim, Gan. "Digital Synthesis of Musical Sounds." Retrieved from http://alumni.media.mit.edu/~gan/Gan/Education/NUS/Physics/MScThesis/

[12] (2010, March 25). "Basic sound theory and synthesis." Retrieved from http://drpetter.se/article_sound.html

[13] Burk, Phil; Polansky, Larry; Repetto, Douglas; Roberts, Mary; Rockmore, Dan. Retrieved from http://music.columbia.edu/cmc/musicandcomputers/