Developing Software Synthesizers

Nov 24, 2013



Dan Bogucki

Computer Technology

University of Wisconsin



With the recent surge in popularity of electronic dance music, one has to look at music
synthesizers, which are an integral part of creating sounds and are the backbone of this type of
music. Music production using digital techniques has been replacing hardware components
because digital tools are much cheaper than their hardware predecessors, more versatile, and more
accurate. There are many different types of software synthesizers available to the public today.
Each offers different characteristics and options to manipulate sound waves, but all are based on
the same primary principles. Software synthesizers have been developed that are still based on
hardware designs, but offer improved reliability and tuning, more options to manipulate sound,
and more waveforms than their hardware counterparts. This paper will delve deeper into what makes
up the core of a software synthesizer and the different types of synthesizers available, as well as
provide background information on sound waves and the history of synthesizers.


Sound synthesis is the process of using electronics to create an electrical pressure sound wave
from scratch and then controlling and manipulating it [8]. Software synthesizers have become
more advanced in recent years as well as more accessible. In addition, software synthesizers
have been able to mimic their hardware counterparts and improve on designs that were once
confined to hardware components. Furthermore, through innovation and development, new
software synthesis techniques have been created to offer users different ways to manipulate
sound waves to their preference. With the recent acceptance and popularity of electronic dance
music, software synthesizers will be used more and become more advanced as this music
continues to grow.


Sound is moving energy that travels as a pattern of changing pressure [ ]. You can also define
sound as the perceived vibration or oscillation of air resulting from the vibration of a sound
source [ ]. A sound source can be anything, from a guitar soundboard, to a speaker cone, a vocal
cord, a hair dryer, or a car engine. Anything that can produce sound is a sound source.


Sound Waves

As a sound source expands and contracts, a pattern of changing air pressure energy moves
away from the source. This is known as a sound wave [ ]. There are four basic sound wave
forms: the sine wave, the square wave, the sawtooth wave, and the triangle wave.

Figure 1: The four basic wave forms

Every synthesizer will have these four basic wave forms for users to manipulate. A
sine wave is created when a sound source expands and contracts absolutely consistently [ ]. The
sound source is said to be moving in simple harmonic motion [ ]. This is not found in nature;
sine waves can only be created by an electronic sound source, for example an oscillator found
on a synthesizer. Sine waves are the building blocks of all complex sounds, including
square waves, sawtooth waves, and triangle waves.

You can think of all the wave forms as being made up of many individual sine waves of
differing frequencies which interact and interfere with each other. Each individual vibration,
which is a sine wave, is known as a harmonic. Square waves are made up of at least 20
harmonics and contain only odd harmonics. Triangle waves are comprised of around 10
harmonics, which are also only odd harmonics. Sawtooth waves are comprised of around 25
harmonics and include both even and odd harmonics, unlike the other wave forms.
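The harmonic makeup described above can be checked numerically. The sketch below is an illustration, not taken from the paper's sources: it sums odd sine harmonics with 1/k amplitudes, which is how a square wave is built from sines, and the choice of 20 harmonics follows the text.

```python
import math

def square_from_harmonics(phase, num_harmonics=20):
    """Approximate a square wave at the given phase (radians) by summing
    odd sine harmonics, per the Fourier series (4/pi) * sum(sin(k*x)/k)
    over odd k."""
    total = 0.0
    k = 1
    for _ in range(num_harmonics):
        total += math.sin(k * phase) / k
        k += 2  # only odd harmonics contribute to a square wave
    return (4.0 / math.pi) * total

# Near phase pi/2 the partial sum is already close to the square wave's +1 level.
value = square_from_harmonics(math.pi / 2)  # close to +1.0
```

Adding more harmonics sharpens the corners of the approximation, which is exactly the sense in which complex wave forms are "made up of" sine waves.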


Sound Synthesizers

A sound synthesizer is an electronic instrument capable of producing and generating a great
variety of sounds by combining different frequencies [ ]. A synthesizer generates electric
signals, or wave forms as they are also known, which are converted to sound through speakers.

History of Sound Synthesizers

Technology for the sound synthesizer first started being developed in the late 1860s. Hermann
von Helmholtz built a number of electromechanical oscillators to aid his research into human
perception of sound [ ]. The devices he created generated only simple sounds, but he laid the
groundwork for the synthesizers developed later. Next, in 1876, Elisha Gray created the musical
telegraph, which was based on telephone technology [10]. In 1906, Lee de Forest invented
the vacuum tube triode amplifier valve [ ]. Engineers started to see the possibility of using the
new technology to create electronic musical instruments.

It was not until the 1950s, when transistors became available, that there was a breakthrough
in synthesis technology. Harald Bode created the Melochord in 1961 using this technology;
it was the first voltage-controlled synthesizer. In 1964, Robert Moog constructed a
transistor-controlled oscillator and amplifier. A few years later he created the Moog
synthesizer, the first synthesizer that was made available to the public.

While inventors like Bode and Moog were creating hardware synthesizers, Max Mathews of Bell
Telephone Labs began exploring the use of computers to generate sound. He first created a
software program called MUSIC I in 1957, followed by MUSIC II in 1958. These programs
were written in assembler code for the IBM 704 mainframe. In 1960, Mathews created
MUSIC III, which ran on the second-generation IBM 7094. In 1962, alongside Joan Miller,
he created MUSIC IV, which was an improved version of MUSIC III. Again, this was all done
in assembler code. It was not until 1968, when MUSIC V was created, that one of these
programs was written in something other than assembler; this time Mathews used FORTRAN.
He had to reorganize the internal functions to overcome the inefficiencies of the language.

Also in 1968, Barry Vercoe developed MUSIC 360, a fast version of MUSIC IV for the new
generation of IBM 360 mainframes. Then, in 1973, Vercoe developed a compact version of
MUSIC called MUSIC11, so named because it was written in PDP-11 assembler code
specifically for the PDP-11 computer. This was the first digital music synthesis program for
minicomputers and was programmed for use with a keyboard and a teletypewriter Visual Display
Unit. In 1979, the Australian Fairlight CMI synthesizer was introduced. This synthesizer
had two 6-octave keyboards, its own graphics unit, and a typewriter terminal. The user was able
to use different synthesis methods like additive synthesis, subtractive synthesis, and sampling.
Not only that, but the user was able to program the software.

Hardware vs. Software Synthesizers

There is a heavy debate as to whether one type of synthesizer is better than the other. It is good
to note that there are many differences between hardware and software synthesizers. For
example, software synthesizers use digital processors while hardware synthesizers use analog
circuitry. In addition, with computer technology rapidly advancing, so are software
synthesizers. It is possible for software programmers to offer more features for a given price,
as well as more customization. Also, hardware synthesizers generally can only do subtractive
synthesis; sampling and additive synthesis are just not feasible on hardware synthesizers. Even
so, many musicians prefer the character and sounds of analog circuitry, so it really boils down
to personal preference.

Turning to Software Synthesizers

With personal computers being a common commodity in most households, anyone can use a
software synthesizer. Computers are faster, cheaper, and more reliable than hardware
synthesizers, and most popular sound cards today have the ability to emulate oscillators,
envelopes, filters, frequency modulators, samplers, and any other feature the programmer wants
to include. With processors faster and better than ever, it is possible to compute sounds in
real time. In addition, new algorithms and more powerful software give desktop computers
the functionality of studio equipment.

New coding schemes for high-quality audio and higher bitrates for file transmission mean that
digital music recordings are freely available on the internet for anyone to access and download.
Many software synths today allow the user to connect virtual synths in familiar and new ways for
endless customizing. With the technology out there today, it is really hard to tell the difference
between hardware and software synthesizer sounds. Also, software developers are able to
interact with operating systems and applications to create sounds at a higher level in generic
terms, rather than in a hardware-specific way, so there is less of a limit on the programmer and
user. In addition, software is generally cheaper than the hardware version, and you generally get
more value for your dollar.

Synthesizer Methodology

There are four main types of synthesis methodology: additive synthesis, subtractive synthesis,
frequency modulation or FM synthesis, and wavetable synthesis. These are not the only
synthesis methodologies available, but they are the ones users will come across most when
dealing with synthesizers.


Additive Synthesis

Additive synthesis is producing sounds by adding different waveforms together [ ]. This
process is based on the Fourier theory, which can be described by considering waveforms as
blocks that, when put together, add up to a completely different waveform, so that a more
complex sound can be obtained [ ]. Dynamic changes in the waveform are created by varying
the relative amplitudes of as many as several dozen waveforms. Of these methods, additive
synthesis has the most potential to create sounds that mimic musical instruments.


Subtractive Synthesis

Subtractive synthesis produces sounds by generating a waveform that contains more harmonic
content than a sine wave. The waveform then passes through a filter which subtracts harmonics
to obtain the desired sound [ ]. It is basically the reverse of additive synthesis.
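As an illustrative sketch of this idea, not taken from the paper's sources, the following Python generates a harmonically rich sawtooth and then runs it through a simple one-pole lowpass filter; the filter form and the alpha coefficient are assumptions chosen for clarity.

```python
def naive_sawtooth(num_samples, freq, samplerate=44100):
    """Generate a harmonically rich sawtooth ramp in [-1, 1)."""
    phase, out = 0.0, []
    for _ in range(num_samples):
        out.append(2.0 * phase - 1.0)   # ramp from -1 up toward +1
        phase += freq / samplerate
        if phase >= 1.0:                # wrap at the end of each cycle
            phase -= 1.0
    return out

def one_pole_lowpass(samples, alpha=0.1):
    """Subtract high-frequency content with a one-pole filter:
    y[n] = y[n-1] + alpha * (x[n] - y[n-1]). Smaller alpha cuts more."""
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

raw = naive_sawtooth(1024, freq=440.0)  # rich in harmonics
smooth = one_pole_lowpass(raw)          # harmonics subtracted by the filter
```

The filtered output rounds off the sawtooth's sharp reset, which is the audible effect of subtracting its upper harmonics.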

Frequency Modulation


Frequency modulation synthesis uses rich FM sidebands as harmonics for the synthesized
waveform [ ]. This FM technique is applied digitally through FM operators. An operator has a
digital waveform generator, usually a sine waveform generator, and an envelope. The output of
one operator is routed to modulate the frequency of another operator. Modulation of one sine
wave by another waveform produces more complex sounds that depend on the frequency and
level of the sources. Envelopes vary the relative levels of the modulator and the target to
produce dynamic changes in timbre.
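A minimal two-operator FM sketch, assuming a sine modulator routed into the phase of a sine carrier; envelopes are omitted for brevity, and all names are illustrative.

```python
import math

def fm_sample(t, carrier_freq, mod_freq, mod_index):
    """Two-operator FM: the modulator's output is routed into the phase
    of the carrier, producing sidebands around the carrier frequency.
    mod_index controls the level of the modulator, and therefore the
    brightness of the result."""
    modulator = math.sin(2.0 * math.pi * mod_freq * t)
    return math.sin(2.0 * math.pi * carrier_freq * t + mod_index * modulator)

# With mod_index = 0 this reduces to a plain sine at the carrier frequency.
plain = fm_sample(0.001, 440.0, 110.0, 0.0)
bright = fm_sample(0.001, 440.0, 110.0, 5.0)
```

Sweeping mod_index over time with an envelope is what produces the dynamic timbre changes the paragraph describes.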



Wavetable Synthesis

Wavetable synthesis, or sampling as it is also known, is the most widespread method for sound
synthesis. It has become popular because of its low computational cost and ease of operation.
Basically, recorded or synthesized musical events are stored in internal memory and are
played back on demand [ ]. From there, you can manipulate the sound using different playback
tools and various techniques for sound variation, such as pitch shifting, looping, enveloping,
and filtering. Pitch shifting allows the wavetable to play the sound at different pitches. Looping
will play the sound recursively during playback. Enveloping denotes the application of a
time-varying gain function which consists of the attack, decay, sustain, and release.
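A sketch of wavetable playback with pitch shifting and looping; the table here holds a synthesized sine cycle, but a recorded sample would be handled identically. The table size and function names are illustrative assumptions.

```python
import math

SAMPLERATE = 44100
TABLE_SIZE = 1024

# One stored cycle; in a sampler this would be recorded audio in memory.
wavetable = [math.sin(2.0 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def play_wavetable(table, freq, num_samples, samplerate=SAMPLERATE):
    """Pitch-shift by stepping through the table with a fractional
    increment; looping falls out of wrapping the read position
    (modulo the table length)."""
    phase = 0.0
    step = len(table) * freq / samplerate   # table positions per sample
    out = []
    for _ in range(num_samples):
        out.append(table[int(phase) % len(table)])
        phase += step
    return out

octave_up = play_wavetable(wavetable, 880.0, 256)  # larger step, higher pitch
```

Doubling freq doubles the step through the table, which is all pitch shifting amounts to in this scheme.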

Makeup of a Synthesizer Today

Synthesizers today all include oscillators, envelopes, filters, and frequency modulators to allow
the user to manipulate and create sounds. Some synthesizers will include samplers for wavetable
synthesis as well. Also, most synthesizers will have more options and features for sound design,
but all of them need at least oscillators, envelopes, and filters.

Oscillators are a control that repeats a waveform with a fundamental frequency and peak
amplitude [ ]. They generate your starting waveform and are the first control most producers
use when creating sounds. Envelopes are essentially the synthesizer's time-varying gain
function [ ]. Basically, the envelope is a sequence of events that occurs every time a key is
pressed. First, there is the attack function, which is how long it takes for the sound to reach full
volume. Next, the decay is how long it takes for the sound to transition to a sustain level. After
that, there is the sustain function, which is how long the sound is held at a certain level.
Finally, there is the release function, which sets when the sound starts decaying to zero, or
silence, after the keyboard or MIDI keyboard key is lifted.
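The four envelope stages can be sketched as a piecewise-linear gain function. In the sketch below, sustain_time is a stand-in for how long the key is held after the attack and decay, an assumption made to keep the example self-contained (a real synth would switch to the release stage on the key-up event).

```python
def adsr_gain(t, attack, decay, sustain_level, sustain_time, release):
    """Piecewise-linear ADSR gain at time t (seconds) after the key press.
    Returns a value in [0.0, 1.0] to multiply against the oscillator output."""
    if t < attack:                          # ramp from silence to full volume
        return t / attack
    t -= attack
    if t < decay:                           # fall to the sustain level
        return 1.0 - (1.0 - sustain_level) * (t / decay)
    t -= decay
    if t < sustain_time:                    # hold while the key is down
        return sustain_level
    t -= sustain_time
    if t < release:                         # fade to silence after key-up
        return sustain_level * (1.0 - t / release)
    return 0.0

gain = adsr_gain(0.3, 0.1, 0.1, 0.5, 1.0, 0.2)  # in the sustain stage
```

Multiplying each oscillator sample by this gain is what shapes a raw waveform into a note with a beginning, middle, and end.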

Filters are used to subtract frequency content. They generally behave like an equalizer, but they
only subtract frequency content, while equalizers can raise or lower the volume of entire
frequency ranges. There are many different types of filters that accomplish different end goals
for sounds. The four basic types that appear in synthesizers today are lowpass, highpass,
bandpass, and bandreject filters.

Developing a Software Synthesizer

To create a sound we need to move an object, and in this case that object will be a set of speakers
or headphones. Keep in mind that the formula for generating a simple sine wave is y = sin(x).
Many programming languages have standard mathematics libraries with many of the
trigonometric functions represented. The most basic synthesis methods follow this same general
scheme: a formula or function is defined that accepts a sequence of values as input [ ].

Computers have soundcards that have digital-to-analog converters. Those soundcards are able to
generate an electrical signal from a digital number that is given to them. The number must be
within a certain range, like -1 to 1. Generally, 0 gives us the rest position of the speaker element,
while a positive number pushes the speaker forward and a negative number pushes the speaker
in the other direction, back into the speaker case. Basically, all the programmer needs to do is
come up with a sequence of numbers to feed into the soundcard. The programmer sends a long
sequence of numbers, and the soundcard goes through them with exact timing and feeds each
number to the D/A converter. This happens at a predetermined rate, 44100 samples per second,
or hertz (Hz). This rate is used because it allows the user to reproduce sounds over the entire
range of the human ear.
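The "sequence of numbers" described above can be demonstrated by writing samples to a WAV file with Python's standard wave module, used here as a stand-in for handing the stream to a real soundcard driver; the file name and helper are illustrative.

```python
import math
import struct
import wave

SAMPLERATE = 44100  # samples per second, covering the range of human hearing

def write_wav(filename, samples, samplerate=SAMPLERATE):
    """Convert floats in [-1.0, 1.0] to 16-bit integers and write them as a
    mono WAV file; a soundcard's D/A converter consumes the same stream."""
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)        # mono
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(samplerate)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        w.writeframes(frames)

# One second of a 440 Hz sine: the sequence of numbers fed to the converter.
tone = [math.sin(2.0 * math.pi * 440.0 * n / SAMPLERATE) for n in range(SAMPLERATE)]
```

Calling write_wav("tone.wav", tone) produces a playable file; sending the same numbers to an audio API instead of a file is the only difference in a real synthesizer.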

Producing a Simple Sine Wave

A cycle for a sine wave is 2π radians long. Sine waves have a peak amplitude of +/-1 as well.
Using the sample rate discussed earlier of 44100 samples per second, since this covers the range
of the human ear, we can use the equation δ = 2πf / Fs, where Fs is the sample rate and f is the
desired frequency, a value input by the user. Below is pseudocode that shows how to write a
function to produce the sine wave.

Figure 2: Pseudocode to produce a sine wave

Input: Peak amplitude (A), Frequency (f)
Output: Amplitude value (y)

y = A * sin(phase)
phase = phase + ((2 * pi * f) / samplerate)
if phase > (2 * pi) then
    phase = phase - (2 * pi)
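The pseudocode in Figure 2 translates almost line for line into a runnable function; this Python sketch keeps the same phase accumulator and wrap, and the function name is illustrative.

```python
import math

SAMPLERATE = 44100

def sine_generator(amplitude, freq, num_samples, samplerate=SAMPLERATE):
    """Phase-accumulator sine oscillator: advance the phase by
    2*pi*f/Fs per sample and wrap it back once it exceeds one cycle."""
    phase = 0.0
    delta = (2.0 * math.pi * freq) / samplerate   # the delta from the equation
    out = []
    for _ in range(num_samples):
        out.append(amplitude * math.sin(phase))
        phase += delta
        if phase > 2.0 * math.pi:
            phase -= 2.0 * math.pi                # wrap at the end of a cycle
    return out

samples = sine_generator(1.0, 441.0, 100)  # one full 441 Hz cycle at 44100 Hz
```

Feeding these samples to the soundcard at 44100 Hz produces a pure 441 Hz tone.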

Producing a Square Wave

A square wave is constructed very similarly to a sine wave. We use the same approach, cycling
through a pattern with a phase variable and resetting once we exceed 2π radians.

Figure 3: Pseudocode to produce a square wave

Input: Peak amplitude (A), Frequency (f)
Output: Amplitude value (y)

if phase < pi then
    y = A
else
    y = -A
phase = phase + ((2 * pi * f) / samplerate)
if phase > (2 * pi) then
    phase = phase - (2 * pi)
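Likewise, Figure 3 can be made runnable; compared with the sine version, the only change is the branch that outputs +A or -A depending on which half of the cycle the phase is in. The function name is illustrative.

```python
import math

SAMPLERATE = 44100

def square_generator(amplitude, freq, num_samples, samplerate=SAMPLERATE):
    """Square oscillator using the same phase accumulator: output +A for
    the first half of the cycle and -A for the second half."""
    phase = 0.0
    delta = (2.0 * math.pi * freq) / samplerate
    out = []
    for _ in range(num_samples):
        out.append(amplitude if phase < math.pi else -amplitude)
        phase += delta
        if phase > 2.0 * math.pi:
            phase -= 2.0 * math.pi
    return out

square = square_generator(1.0, 441.0, 100)  # one full cycle: +1 then -1
```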

Virtual Studio Technology

Virtual Studio Technology (VST) is an interface for integrating software audio synthesizers and
effect plugins with audio editors and hard-disk recording systems [ ]. VSTs use digital signal
processing to simulate traditional recording studio hardware and software. VSTs are supported
by a large number of audio applications and were first developed in 1996 by Steinberg, from
whom the technology can be licensed for use. Currently on version VST 3, the interface allows
any third-party developer to create a VST plugin for use within digital audio workstations.


VSTs run within your digital audio workstation (DAW) and are generally classified as either
instruments or audio effects. Generally, synthesizers and samplers are instruments, while
phasers, compressors, and reverbs are effects. VSTs are the plugin standard for DAWs.
Steinberg's VST SDK is a set of C++ classes based around an underlying C API. Steinberg also
provides a VST GUI toolkit, another set of C++ classes, so the programmer can add a graphical
interface to the VST they develop. In addition, there are several third-party ports available.
These include a Java-based version called jVSTwRapper, a Python ctypes-based VST wrapper,
and two .NET versions, Noise and VST.NET. There is also a Linux alternative, called the Linux
Audio Developers Simple Plugin API or LADSPA, and there is even a header for coding a synth
in Ruby.


The use of software synthesizers in music today is really pushing for more advancement in the
synthesis plugin community. There are many new methodologies being created today and many
improvements on the technology created back in the 1900s. As electronic music continues to
grow, so will the demand for better software to allow the producer endless amounts of creativity
and options. Synthesizers are not only used in music; they also have many other applications
that will see use, such as sound effects for movies or live shows.



References

Alles, Harold G. "Music Synthesis Using Real Time Digital Techniques." Retrieved from

Crombie, D.; Lenoir, R.; McKenzie, N. (2003, September). "…accessible multimedia music." Retrieved from

Echeverria, U.G.; Castro, F.E.G.; Lopez, J.M.D.B. (2010, February). "…between a Hardware and a software synthesizer." Retrieved from

"electronic music." Retrieved from

Gibbons, J.A.; Howard, D.M.; Tyrrell, A.M. (2005, Sept. 9). "FPGA implementation of 1D wave equation for real-time audio synthesis." Retrieved from

Horner, A. (2000, July). "Low peak amplitudes for wavetable synthesis."

Lindemann, E. (2007, March). "Music Synthesis with Reconstructive Phrase Modeling." Retrieved from

Ottewill, Matt. "Synthesis Types." Retrieved from

Phelan, Cormac; Bleakley, Chris J.; Cummins, Fred (2009, June). "Adapting and parameterising auditory icons for use in a synthetic musical instrument." Retrieved from

Rabenstein, R.; Trautmann, L. (2001). "Digital sound synthesis by physical modelling."

Lim, Gan. "Digital Synthesis of Musical Sounds." Retrieved from

(2010, March 25). "Basic sound theory and synthesis." Retrieved from

Burk, Phil; Polansky, Larry; Repetto, Douglas; Roberts, Mary; Rockmore, Dan. Retrieved from