
Digital Synthesis

Computer Assisted Composition


With
SuperCollider

David Cottle

5/8/02

Copyright © May 2002


by David Cottle

SuperCollider 2.2.1, copyright © December 1998

by James McCartney

Contents
Index of Examples........................................................................................................................... 5
1. Digital Synthesis and Computer Assisted Composition Using SuperCollider........................10
Introduction ............................................................................................................................. 10
2. The Language, Programming Basics....................................................................................... 11
Basics.......................................................................................................................................11
Error messages ........................................................................................................................ 13
Objects, messages, arguments, variables, functions, and arrays ............................................. 14
Enclosures (parentheses, braces, brackets).............................................................................. 16
Sandwich.make........................................................................................................................17
Experimenting With a Patch (Just for fun).............................................................................. 20
Section I Digital Synthesis ............................................................................................................ 22
3. The Science of Sound.............................................................................................................. 22
Frequency ................................................................................................................................ 22
Amplitude................................................................................................................................ 23
Phase........................................................................................................................................ 24
Just For Fun ............................................................................................................................. 25
4. Keyword Assignment, MouseX.kr and MouseY.kr, Linear and Exponential values ............. 27
Keyword Assignment .............................................................................................................. 27
MouseX.kr and MouseY.kr ..................................................................................................... 29
Just For Fun ............................................................................................................................. 30
5. Variables, Comments, Triggers............................................................................................... 32
Variables and Comments......................................................................................................... 32
Triggers, Gates ........................................................................................................................ 34
Just For Fun ............................................................................................................................. 35
6. Envelopes, Reciprocals ........................................................................................................... 38
Envelopes ................................................................................................................................ 38
Just For Fun ............................................................................................................................. 42
7. Intervals ................................................................................................................................... 45
8. Additive or Fourier Synthesis, Random Numbers, Debugging and Postln, CPU usage......... 49
Random Numbers, Perception................................................................................................. 53
Bell Array ................................................................................................................................ 56
Debugging, Postln, Comments................................................................................................ 57
Random Bell Patch.................................................................................................................. 59
CPU Usage .............................................................................................................................. 60
Just For Fun ............................................................................................................................. 61
9. Subtractive Synthesis, Noise, Synth.write, Synth.record ........................................................ 63
Chimes..................................................................................................................................... 66
Synth.write, Synth.record........................................................................................................ 67
Just for Fun.............................................................................................................................. 68
10. Karplus/Strong.........................................................................................................................70
Karplus-Strong Pluck Instrument............................................................................................ 70
Just For Fun: Karplus-Strong Patch ........................................................................................ 73
11. Time Variant Control Sources, Offset and Scaling with Mul and Add .................................. 75
Offset and Scaling with Mul and Add..................................................................................... 75

12. Wave Forms, FM/AM Synthesis, Sequencer, Sample and Hold, Real Time Monitoring with
Peep ......................................................................................................................................... 80
Wave Forms ............................................................................................................................ 80
FM and AM synthesis ............................................................................................................. 82
Sequencer ................................................................................................................................ 84
Sample and Hold ..................................................................................................................... 86
Just For Fun ............................................................................................................................. 88
13. GUI Interface........................................................................................................................... 89
GUI Interface........................................................................................................................... 89
Linking GUI Items to a Patch and to Each Other.................................................................... 91
Gui Monitor............................................................................................................................. 93
Section II Computer Assisted Composition ................................................................................ 101
14. Numbers, Operators, Music Functions.................................................................................. 101
Operators, Precedence ........................................................................................................... 101
Messages, Arguments, Receivers.......................................................................................... 103
15. User Defined Functions with Arguments, Expressions, Variables ....................................... 107
16. Iteration Using do(), Comments, "post here always"............................................................ 113
17. Control Using if(), and do() continued, Arrays ..................................................................... 117
Control message "if" ............................................................................................................. 117
Just For Fun ........................................................................................................................... 121
18. Collections, Arrays, Array Messages .................................................................................... 123
19. Strings, and Arrays of strings, the .at() message ................................................................... 126
20. Making music ........................................................................................................................ 130
Review of language:.............................................................................................................. 130
Array messages...................................................................................................................... 131
A Moment of Perspective...................................................................................................... 132
Actual Music Examples: How SC Turns Numbers into Sounds........................................... 133
Example Functions ................................................................................................................ 137
21. More Random Numbers ........................................................................................................ 138
Random Functions in SC....................................................................................................... 139
Filters..................................................................................................................................... 139
Biased Random Choices........................................................................................................ 139
rand........................................................................................................................................ 143
rand2...................................................................................................................................... 143
linrand.................................................................................................................................... 143
bilinrand.................................................................................................................................144
sum3rand ............................................................................................................................... 144
windex ................................................................................................................................... 144
22. SuperCollider Synthesis Basics............................................................................................. 146
Self-Documentation............................................................................................................... 146
Environment Model............................................................................................................... 149
23. The Aesthetics of Computer Music....................................................................................... 155
Mutation ................................................................................................................................ 155
Escaping Human Bias ........................................................................................................... 156
Ignorant iteration ................................................................................................................... 157
Thorough iteration................................................................................................................. 158

24. Total Control, Serialization, MIDI Out ................................................................................. 159
Serialization: Moving to new values ..................................................................................... 161
Ladders, Boundaries.............................................................................................................. 163
Versions of the series or array............................................................................................... 164
25. Total Control Continued, Serialization using Pbind, Pseq, Prand.........................................167
Pfunc, Pseq, Prand................................................................................................................. 167
26. Total Control Continued, Special Considerations................................................................. 171
Absolute vs. Proportional Values, Rhythmic Inversion ........................................................ 171
Pitch....................................................................................................................................... 171
Duration and next event ........................................................................................................ 172
Next Event............................................................................................................................. 172
Non-Sequential Events .......................................................................................................... 172
Amplitude.............................................................................................................................. 173
Rhythmic Inversion ............................................................................................................... 173
27. Music Driven by Extra-Musical Criteria, Data Files ............................................................ 175
Extra Musical Criteria ........................................................................................................... 175
Text........................................................................................................................................ 175
Mapping.................................................................................................................................176
Working With Files ............................................................................................................... 182
28. Markov Chains, Numerical Data Files .................................................................................. 184
Data Files, Data Types .......................................................................................................... 192
Interpreting Strings................................................................................................................ 194
29. Sound Files, Music Concrète.................................................................................................196
30. Tuning Systems ..................................................................................................................... 201
APPENDIX ................................................................................................................................. 202
A. Distribution using SCPlay ...................................................................................................... 202
Changing Libraries, Editing Main.sc, Recompiling, Compressing....................................... 203
Compressed Libraries............................................................................................................ 205
B. Patches for practice................................................................................................................. 207
Patch I: Latch or Sample and Hold ....................................................................................... 207
Patch II; Pulse........................................................................................................................ 211
Patch III FM .......................................................................................................................... 216
Patch IV Sequencer ............................................................................................................... 220
Patch V Filter.........................................................................................................................226
Using Pbind ........................................................................................................................... 230
C. Pitch Chart: ............................................................................................................................. 237
D. UNIT GENERATORS: .......................................................................................................... 239

Index of Examples

2.1 First Patch (play, SinOsc, LFNoise0, .ar) 13


2.2 Second Patch (scope, RLPF, LFSaw, LFNoise1, choose, []) 13
2.3 Arguments (scope, SinOsc, LFNoise0) 16
2.4 Defaults (scope, ar, SinOsc) 17
2.5 Choosing values from an array (array, choose) 19
2.6 experiment (LFNoise0, SinOsc, RLPF, LFSaw, LFNoise1, choose) 21
2.7 experiment 21
3.1 SinOsc 23
3.2 LFSaw 23
3.3 amp 24
3.4 distortion 24
3.5 phase 24
3.6 plot 25
3.7 pi and phase 25
3.8 phase and scope 25
3.9 Just for fun (CombN, SinOsc, abs, LFNoise1, LFSaw, array) 26
4.1 Defaults 27
4.2 keywords, indents 28
4.3 MouseX (LFNoise0, SinOsc, mul) 29
4.4 MouseY 29
4.5 MouseX controlling frequency 29
4.6 exponential change 30
4.7 Just for fun (MouseX and Y, OverlapTexture, Pan2) 30
5.1 Variable declaration, assignment, and comments 33
5.2 Trigger (Impulse, Dust) 34
5.3 Sequencer (array, midicps, Sequencer, Impulse) 35
5.4 Just for Fun (Dust, Sequencer, LFNoise1, OverlapTexture, CombN, RLPF, Pan2) 35
5.5 Compressed fun 37
6.1 Envelopes plotted (plot, perc, triangle, sine, linen, Env) 39
6.2 Complex Envelope (Env.new) 40
6.3 Envelope and Envelope Generator controlling amplitude (Impulse, linen, EnvGen, SinOsc) 40
6.4 Scaled envelope for frequency (EnvGen, Env, linen, mul, add) 41
6.5 Duration, attack, decay 41
6.6 Frequency and duration (scope, Impulse, .kr) 42
6.7 Frequency expressed as a ratio 42
6.8 Duration, attack, decay 42
6.9 Just For Fun: Crotales (scope, Dust, Impulse, kr, Tspawn, rrand, Env, perc, choose, EnvGen, LFNoise1,
LFNoise0, PMOsc, Mix, AllPassN) 43
7.1 intervals (Mix, FSinOsc) 45
7.2 intervals 45
7.3 chord plot (plot, Mix, FSinOsc, array) 46
7.4 interval plots (plot, Mix, FSinOsc, arrays) 46
7.5 audio frequencies (Saw, MouseX, kr, scope) 47
7.6 ratios from LF to audio rate (scope, MouseX, LFSaw) 47
8.1 message chains (rand, choose, arrays, midicps, postln) 50
8.2 message chain (array, choose, rand, midicps, postln) 50
8.3 Wavetable (normalize, asWavetable, WavetableView, GUIWindow, HarmonicsDialog, close) 51
8.4 normalizeSum 52
8.5 random spectra (scope, Mix, SinOsc, arrays) 52
8.6 bell (Mix, SinOsc, arrays, normalizeSum) 53
8.7 rand 55
8.8 test array (Array, fill) 55

8.9 function error 55
8.10 random seed (thisThread, randSeed, postln) 56
8.11 random frequencies (Array, fill, scope, Mix, SinOsc) 56
8.12 postln, post (Array, fill, rand, normalizeSum, postln, Mix, SinOsc) 57
8.13 3 bells (Array, fill, scope, Env, perc, Mix, SinOsc, array, choose, EnvGen, Spawn) 58
8.14 arrays and math 59
8.15 random bells (Array, fill, rand, normalizeSum, Env, perc, Mix, Spawn, choose, kr, ar) 59
8.16 CPU usage (Array, series, Mix, scope, SinOsc, Saw) 60
8.17 harmonic spectra (LFNoise0, Blip, scope) 61
8.18 inharmonic spectra (Env, perc, Spawn, Pan2, SinOsc, EnvGen, kr, choose, rand2) 61
9.1 noise (scope, WhiteNoise, PinkNoise, BrownNoise, GrayNoise, Dust) 63
9.2 Filtered Noise (scope, PinkNoise, MouseX and Y, RLPF, RHPF, BPF) 64
9.3 Saw with Filter (scope, LFSaw, MouseX and Y, RHPF) 65
9.4 Resonant array (scope, Klank, BrownNoise, array, Array, fill) 65
9.5 chime burst (Env, perc, PinkNoise, EnvGen, Spawn, scope) 66
9.6 chimes (Array, fill, rrand, normalizeSum, round, Env, perc, Klank, EnvGen, MouseY, Spawn) 66
9.7 Synth.record (PinkNoise) 68
9.8 Subtractive Synthesis Fun (Mix, Array, fill, Pan2, Klank, Decay, Dust, PinkNoise, rand2, RLPF,
normalizeSum, GrayNoise, LFSaw) 68
10.1 noise burst (scope, EnvGen, PinkNoise) 70
10.2 burst and delay (PinkNoise, EnvGen, Env, perc, CombL) 71
10.3 reciprocal 71
10.4 midi to cps to reciprocal 71
10.5 pluck (scope, midicps, reciprocal, EnvGen, Env, perc, PinkNoise, CombL) 71
10.6 Spawn and pluck (Spawn, scope, midicps, reciprocal, EnvGen, Env, perc, PinkNoise, CombL) 72
10.7 expanded pluck (scope, midicps, choose, reciprocal, EnvGen, Env, perc, PinkNoise, CombL, RLPF,
LFNoise1, AllpassN, Spawn) 73
11.1 add and mul; offset and scale 75
11.2 confusing; mul: 300, add: 100, range: 200 to 400 76
11.3 less confusing? 77
11.4 SinOsc as vibrato 77
11.5 vibrato 77
11.6 Line.kr 78
12.1 wave forms (plot, SinOsc, Saw, LFTri, Pulse, PinkNoise, LFNoise0) 80
12.2 Saw, LFSaw, Pulse, LFPulse 81
12.3 LF waves (SinOsc, LFPulse, LFSaw, LFTri, mul, add) 81
12.4 LF control (SinOsc, LFTri, mul, add) 81
12.5 synthetic sounds (scope, SinOsc, LFTri, MouseX, mul, add) 82
12.6 AM Synthesis (SinOsc, scope, mul, Saw) 83
12.7 FM Modulation (scope, SinOsc, mul) 83
12.8 MouseX and MouseY controlling FM frequency and index 83
12.9 midicps 84
12.10 Sequencer (array, midicps, SinOsc, Sequencer, Impulse, kr) 84
12.11 Dust.ar (array, midicps, SinOsc, Sequencer, Dust, kr) 85
12.12 scramble, reverse (Array, fill, postln, scramble, reverse) 85
12.13 sequencer variations (array, scramble, midicps, Sequencer, kr, Dust) 85
12.14 Latch (Blip, Latch, LFSaw, Impulse, mul) 86
12.15 Complex Wave as Sample Source (Mix, SinOsc, Blip, Latch, Mix, Impulse) 87
12.16 Peep (play, SinOsc, Peep, LFNoise0, mul, add, Blip, Latch, Impulse) 87
12.17 Latch and MIDI pitches (Blip, Latch, SinOsc, MouseX, mul, add, Impulse, floor, midicps) 88
12.18 Just For Fun, DegreeToKey sample and hold 88
13.1 Gui Window 90
13.2 GUI Window in a patch 91
13.3 GUI items linked to each other 91
13.4 More GUI 92
13.5 Simple patch using postln to monitor 93

13.6 using GUI to monitor 94
13.7 Updated values in GUI 95
13.8 complex GUI monitor 96
13.9 GUI monitor; single string view 97
13.10 GUI monitor; seqInst 98
13.11 midipcs and midips 99
14.1 Evaluation 101
14.2 Operators (+, /, -, *) 102
14.3 More operators 102
14.4 Binary operators (>, <, ==, %) 102
14.5 Predict 103
14.6 Music related messages (cos, abs, sqrt, midicps, cpsmidi, midiratio, rand, rand2, rrand) 103
14.7 Coin 104
14.8 Receiver notation (cos, coin, rand) 104
14.9 Binary functions (min, max, round) 104
14.10 min and max 104
14.11 Receiver notation 105
14.12 nesting (max, min, midicps, rand) 105
14.13 Several lines of code (midi, postln, max) 105
14.14 message strings (midicps, post, min) 106
15.1 Variables 108
15.2 Variable declaration 108
15.3 Function 108
15.4 Function with arguments 109
15.5 Function with arguments and variables 109
15.6 Function calls 110
15.7 Keywords 111
15.8 Return 111
16.1 Function 114
16.2 function passed as variable 114
16.3 do prototype 115
16.4 do example 115
16.5 do with comments 116
16.6 do in receiver 116
17.1 if examples 117
17.2 if commented 118
17.3 if examples 118
17.4 10.do 119
17.5 do(10) with arguments 119
17.6 do([array]) with arguments 119
17.7 10 boings 120
17.8 array of boings 120
17.9 pitch class do 120
17.10 new line 120
17.11 new line 121
17.12 Just for fun, arpeggios (scope, Spawn, Pan2, SinOsc, midicps, EnvGen, Env, perc, rand2) 121
18.1 post array 124
18.2 array math 124
18.3 array.do and math 124
18.4 array + each item 124
18.5 two arrays 125
18.6 testing an array 125
19.1 "C" + 5? 127
19.2 pitch array index 127
19.3 random pitch array 127
19.4 random pitch class 128

19.5 concatenated string 128
19.6 finesse 129
20.1 arrays messages 131
20.2 Illiac suite? 133
20.3 Synth 134
20.4 Synth.play 134
20.5 Synth with keywords 134
20.6 Random sequencer 135
20.7 Commented Synth random sequence 135
20.8 pitchFunc 136
20.9 Pitch functions 137
21.1 bias 140
21.2 bias float 141
21.3 bias 141
21.4 bias 141
21.5 bias 142
21.6 test bias 142
21.7 Text float bias 142
21.8 rand test 143
21.9 rand2 test 143
21.10 linrand test 144
21.11 bilinrand test 144
21.12 windex 145
21.13 windex test 145
22.1 Help files 147
22.2 More help 147
22.3 SinOsc patch 148
22.4 SinOsc with keywords 149
22.5 blipInst 150
22.6 Pbind 150
22.7 Biased random total control 151
22.8 Total control model 152
22.9 tempo 153
23.1 I Dig You Don't Work 158
24.1 MIDI Total Control 160
24.2 choosing versions 165
24.3 choosing arrays 165
24.4 reverse, scramble, transpose 165
24.5 noteFunc 165
25.1 Original, Retrograde, Inversion, Inverted-Retrograde 168
25.2 Original, Retrograde, Inversion, Inverted-Retrograde 168
25.3 12-tone Pbind 168
25.4 Random walk? 169
25.5 Sharing values in the environment 169
27.1 Array string 176
27.2 ascii values 176
27.3 pitchMap 177
27.4 mapping array 177
27.5 EMC pitch 177
27.6 reading a file 183
27.7 reading a file 183
28.1 Frère Jacques Markov chart 187
28.2 transTable 188
28.3 Parsing the transTable 189
28.4 Probability chart 189
28.5 Simple Markov 190

28.6 test ascii 193
28.7 data files 193
28.8 interpreting a string 195
29.1 Playing a soundfile 196
29.2 loops 197
29.3 sound file array 198
29.4 concrete study 199
30.1 Library.put 204

1. Digital Synthesis and Computer Assisted Composition Using
SuperCollider

Introduction

1 Assignment: No assignment as of this writing.

This text is a compilation of materials I've used to teach our courses in Digital Synthesis and
Computer Assisted Composition. I originally created all the files in SuperCollider itself, but
students began to complain about the organization: no page numbers, table of contents, or index.
SuperCollider (SC) is a great program for teaching because you can mix text with working examples
that generate sound and can be tested in the same document that teaches a technique. But it really
wasn't designed for publishing text, so I decided to convert the course materials into a more
publishable format. This document is the result. It isn't meant to replace any of the tutorials or
help files available in SC. You should read them too.

There are two sections. The first focuses on digital synthesis, the second on computer assisted
composition topics. I assume no experience in programming since many of our students have
never programmed. There are some chapters on language, but most of the examples of actual
synthesis do not rely heavily on programming knowledge. You could skip the language part and
still be able to work with the synthesis examples (though I don't recommend it). The idea is to
get you generating sound quickly.

The chapters on synthesis are written as an auxiliary companion to a good synthesis text. (In
other words, I don't want to spend a lot of time explaining what a sine wave is. Rather, I'll
demonstrate how to generate a sine wave in SC.) The text that I follow is An Introduction to the
Creation of Electroacoustic Music by Samuel Pellman.

A note on format: everything that is in courier font is code that you are supposed to type and
execute in SC. There should be a companion file distributed with this text which contains the
lines of code only, extracted to a text file. You can use this text file (open the file in SC) to run
the examples. This should save having to retype or copy and paste these examples into SC. The
code file is a bit dense, but you should be able to run each line or set of lines comprising a patch
without reformatting.

I am more of a composer than a computer scientist (and even less of a writer), and I come from a
background in C. My OOP terminology may be incorrect and I'm sure the writing could be
better. I apologize in advance. I'd be happy to get your comments and corrections. Send them to
cottle@cerlsoundgroup.org.

SC is created and maintained by James McCartney. For more information on the program
connect to http://www.audiosynth.com.

2. The Language, Programming Basics

Basics

2 Assignment:
a) Identify the objects, messages, arguments, arrays, and functions in the examples below.
b) Modify the patches below. Save each of the modified patches in a separate file (named
correctly to receive credit) and hand it in the assignment drop box folder.

Synth.play({SinOsc.ar(LFNoise0.ar(10, 400, 800), 0, 0.3)}, 5)

Synth.scope(
    {
        RLPF.ar(
            LFSaw.ar([8, 12], 0.2),
            LFNoise1.ar([2, 3].choose, 1500, 1600),
            0.05
        )
    }, 0.05
)

SuperCollider is a text editor, programming language, compiler, and digital synthesizer all in
one. This means you use SC to create and edit lines of code that contain instructions for playing
sounds. You can then use SC to evaluate and debug the code, compile or run the code, and the
result (if your code is correct) should be real time audio. The text editing functions (select, copy,
replace, etc.) are similar to any basic editor. But there are a few handy editing features that are
unique to, and particularly useful in, compilers. If you've never written code, their usefulness is
not immediately apparent, so I suggest you come back and review this list often. They are:

command-, (Go to line number): When debugging it is often useful to move directly to the line number displayed in the error message.

command-/ (Make selected lines a comment): When trying different variations of code it is often useful to "comment out" entire sections of code using the comment command. Commenting lines makes them invisible to the compiler and they are no longer used in the patch.

Shift-command-/ (Uncomment selected lines): Same as above.

command-` (Balance enclosures): This not only allows you to check to make sure your enclosures are balanced, but will often clarify argument lists by showing both enclosures surrounding the list.

command-[ (Shift code left): When breaking large sections of code into several lines, indents are used to show the hierarchy of a group of lines. This key allows you to indent an entire group of lines at once.

command-] (Shift code right): Same as above.

command-. (Stop playback): Stops playback.

command-H (Show help file for item): Opens the help file for a given item.

command-K (Post to this window always): Error messages, and debugging messages that you insert into your patches, are normally posted to whatever window you have open. If it is the file you are working on then the posted message becomes unwanted text which must be removed. Post here always allows you to open a new window and designate it as the place to post messages, keeping the file you are working on clean.

Syntax Colorize (menu item): Colorizing code clarifies the function of specific elements such as messages, objects, comments, values, variables, etc.

Double click on any enclosure ("{", "}", "[", "]", "(", or ")") (Select all code between matching enclosures): Not only helps clarify enclosures, but if you add a "(" and ")" at the beginning and end of a patch you can then select the entire patch by double clicking on either enclosure.

command-9: Reduce the output volume.

command-0: Increase the volume.

Once you have written lines of code you can use SC to evaluate the expression. "Evaluation" can
result in either a message from the system about the code, or numbers (data) resulting from the
expression or functions (like a calculator), or sounds from a synthesizer patch. Both data and
sound can be useful in composition. The data that might result from an SC patch could be pitches
for transcription or instructions to the performer or the results of analysis. But our goal will be
sound.
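For example, once you know how to run code (explained next), a line like the one below returns data rather than sound. This is just a minimal illustration; the postln message, covered in more detail later in the text, simply prints the result to the post window.

// select this line and press enter; the result (880) appears in the post window
(440 * 2).postln;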

There are two ways to evaluate code: command-p, and the "enter" key (not "return," but "enter").
The method we will use most often is the enter key. To designate what code should be evaluated
you select it. This is often the entire file so you can use command-a (select all), but it will not
always be the entire file. If it is only a single line you can just place the cursor on that line then
hit "enter." Before we try it with the example below let me remind you that command-period
stops playback. (Press the command key and period key at the same time.) Lines 1 through 10
are two examples to try. To try the first one, type the line in SC, position the cursor anywhere
within the line, and press enter. (You can also select the entire line and press enter.) To run the
second example you have to select lines 1 through 9 using click and drag, then press enter.
2.1 First Patch (play, SinOsc, LFNoise0, .ar)

Synth.play({SinOsc.ar(LFNoise0.ar(10, 400, 800), 0, 0.3)}, 5)

2.2 Second Patch (scope, RLPF, LFSaw, LFNoise1, choose, [])

Synth.scope( //line 1
    {
        RLPF.ar(
            LFSaw.ar([8, 12], 0.2),
            LFNoise1.ar([2, 3].choose, 1500, 1600),
            0.05
        )
    }, 0.05
) //line 9
//end patch

Error messages

One thing you will notice right away about SC (or any compiler) is that it is very unforgiving
about syntax and punctuation. It can't interpolate for you. You have to have every comma,
period, and semicolon in the correct spot. You have to be careful about upper case and lower
case letters. If you make any of these errors the program will print an error message. Sometimes
the error messages are dense and hard to decipher. But usually they will at least be pointing to
the spot where you made an error. (Or more correctly where it could no longer parse the code.
The actual error may be before that.) Even if you don't understand what the error message says,
you should at least conclude that you probably typed in something wrong and you need to look at
the code more carefully. Here is a typical error message.

• ERROR: Parse error

in file 'selected text'

line 1 char 54 :

Synth.play({SinOsc.ar(LFNoise0.ar(10, 400, 800), 0 0.3•)}, 5)

-----------------------------------

• ERROR: Command line parse failed

"Parse error" means that it wasn't able to parse the code. The third line of the error message tells
you where in the file the error was encountered. In this case, line 1 char (character) 54. (This is
essential for finding the error—you use this information in conjunction with "go to line number"
to get to your errors quickly.) Then there is a "•" in the error message that tries to point to the
spot where the parsing failed. You usually have to look carefully at the characters just before the
"•" to find the problem.

Compare the line in the error message to original line 1. Can you spot the error? There is a
missing comma. This is a common error for beginning programmers. Learning to read the error
message and using it to figure out where your punctuation is wrong will be useful. And don't be
discouraged by the error messages. They are normal. While I was writing one of the examples
below (about 12 lines of code) I had to run and correct errors six times (not just six errors, but
running it, correcting it, and running it again six times). I should probably be more careful, but
my point is that even people who have been doing this a lot make small typing errors. It's so easy
to run the code and then correct the errors that I don't really think of it as making a mistake, but
rather as using the program to "spell check" my code.
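For reference, here is the corrected line, with the missing comma restored after the 400. It is the same patch as example 2.1, so it should now run without complaint.

// the corrected line; note the comma between 400 and 0.3
Synth.play({SinOsc.ar(LFNoise0.ar(10, 400, 800), 0, 0.3)}, 5)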

Objects, messages, arguments, variables, functions, and arrays

I don't intend to spend a lot of time with the language or syntax in SC. But there are six things
that you really need to understand: objects, messages, arguments, variables, functions, and
arrays. Here are some examples:

Objects (e.g., Synth, SinOsc, LFNoise, EnvGen, 5, 6.7, "string"): There are many different kinds of objects in SC. In the two examples above all the objects begin with caps. Numbers are also objects. Text inside quotation marks is an object.

Messages (e.g., play, scope, ar, max, rand, midicps): Messages begin with lower case letters. When you see them in code they are after an object and a period ("ar" for example is a message and would be found in code as "SinOsc.ar").

Arguments and argument lists (e.g., (1, 2, 4), ("string", 45, myVar), ({function}, 0.1, [1, 2, 3])): Arguments are numbers, values, expressions, functions, or arrays, and are usually found in an argument list. Argument lists are enclosed in parentheses and separated by commas. They follow a message.

Variables (e.g., pitchClass, nextCount, freqArray, myVar): Variables are names that the programmer (you) declares and uses throughout the program. They can have any name or spelling you want but must begin with a lower case letter (not a number), are a single word without spaces, and often include caps in the middle for clarity.

Enclosures (e.g., (), {}, []): Enclosures are parentheses, brackets, or braces. In this text I will use the term enclosure for any of the three. Enclosures are important structural indicators. They show where functions begin and end, and where argument lists and arrays begin and end.

Functions (e.g., {lines of code}): A function contains lines of code enclosed in braces.

Arrays (e.g., [1, 2, 3], ["C", "D"], [a, b, c]): An array is a list of items enclosed in brackets, separated by commas. An array can be used as an object, or as part of an argument list.

Unit generators, or ugens (e.g., LFNoise0.ar, SinOsc.ar, Sequencer.ar): "Ugen" is a term used to describe combinations of code that generate values. They are analogous to modules on a classic synthesizer.

It is essential that you become comfortable with identifying these elements in code. Before we
discuss each of these separately look again at the Synth examples above and practice identifying
each of the items. I'll do the first one: "Synth," "SinOsc," "LFNoise0," are objects. "play" and
"ar" are messages. Functions and arguments are a little harder to spot. All of the text
"{SinOsc.ar(LFNoise0.ar(10, 400, 800), 0, 0.3)}" is a function (everything between { and }).
Arguments are confusing because they can often be nested. But the way you spot them is to look
for a message such as ".ar" followed by an opening parenthesis. All the items between that
opened parenthesis and the matching closing one are the arguments. Each argument is separated
by a comma. In the code "LFNoise0.ar(10, 400, 800)" the ".ar" is the message, so "(10, 400,
800)" is a list of three arguments for the ar message. Spotting arguments inside a message is
essential because they define the qualities of the sound. Take the following patch as an example.
The arguments for LFNoise0.ar are (10, 400, 800). Change these three values and run the code to
see how they change the sound. The only rule is that the third number (800) should always be at
least 100 higher than the second number. If you make the second number 1200, then the third
number should be at least 1300. Try 15, 25, 30, or 40 for the first argument. Try 100 and 700, 30
and 100, or 1600 and 1700 for the second two arguments.
2.3 Arguments (scope, SinOsc, LFNoise0)

Synth.scope({SinOsc.ar(LFNoise0.ar(10, 400, 800), 0, 0.3)}, 0.1)
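Here is one variation along those lines, just as a sketch of the kind of experimenting I mean (the values follow the rule above: the third number is more than 100 higher than the second). Substitute your own numbers and listen to the difference.

// a variation on example 2.3: try your own values in place of 15, 100, and 700
Synth.scope({SinOsc.ar(LFNoise0.ar(15, 100, 700), 0, 0.3)}, 0.1)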

If you've worked with analog synthesizers you are used to using knobs and sliders to change the
sound. The knob or slider is labeled with numbers that represent such elements as frequency,
amplitude, or resonance. You can think of arguments as the value a knob might point to. The
difference (the advantage, I should say) to SC is that you can be very precise. Rather than fish
around for an A 440 you can enter the number exactly.

Now that we've identified the arguments for LFNoise0.ar, what are the arguments for the "ar"
message that follows SinOsc? They are all enclosed in the parentheses following the SinOsc.ar:
"(LFNoise0.ar(10, 400, 800), 0, 0.3)." Notice that the first argument for this list is all the code we
were just looking at in the previous example; "LFNoise0.ar(10, 400, 800)." The second argument
is 0, and the third argument is 0.3.[1] (Try changing the 0.3 to values between 0.1 and 0.9.)

There is one more object, message, and argument list to identify. Synth is the object. "scope" is
the message, and the entire text from the first "{" to the last "}" is the first argument for the
message "scope." The second argument is 0.1. (Try changing the 0.1 to 0.01, 0.05, 0.2, 0.5, and
1. Can you determine what this value represents?) So the arguments for LFNoise0.ar are
combined with LFNoise0.ar as the first argument for SinOsc.ar, and all of that is combined
inside a function as the first argument for Synth.scope. Using a function, an object and message,
or several lines of code as a single argument is called nesting. One reason argument lists and
functions can be confusing is that a single argument can be just one number or an entire function,
array, or variable. One argument can be represented by more than one line of code, sometimes
pages of code.

Identify all the objects in examples 2.1 and 2.2. (Answers are in the back.) Identify the messages,
arrays, functions, and arguments (there are no variables—we'll cover them later).

Enclosures (parentheses, braces, brackets)

After you've programmed for a while you get used to reading nested code, but it takes a little
practice. The trick is to see where the opening parenthesis, bracket, and brace are matched with a
closing parenthesis, bracket, and brace. (From here on I'll use the term "enclosure" when
referring to any of the three types.) When the program is run the computer matches up these
enclosures and runs or evaluates the code inside the innermost matching enclosures, then works
outward to the outer lines of code. To understand the flow of a program you need to be able to
follow the nested expressions. One method for keeping it clear is to use progressive levels of
indented text as I have with example 2.2.

[1] Note that you have to use a leading 0 when writing decimal fractions: 0.3, not .3.

Another way to quickly see where two enclosures are matched is to use command-`. SC will
shade everything within matching enclosures. To see how this works, place the cursor next to the
2 in the example 2.2 and repeatedly hit command-`.

You can also double click on any enclosure and SC will shade everything between it and the
matching enclosure.

One more aid for matching enclosures is built into the interface of the SC editor. You may
have noticed that when you type in a closing enclosure SC will "flash" its matching opening
enclosure. This is a good way to watch matching enclosures as you type and illustrates how
important it is to keep track of them, matching them as you code.

Now we are ready for a brief explanation of objects, messages, and arguments.

Sandwich.make

In my own experience and the experience of my students, grasping the terminology of an object
oriented language is made more difficult because the object names are so foreign (Synth,
LFNoise0, etc.). So I'll use some familiar, but fictitious, objects and messages to explain how
they work. (These examples won't work in SC!) If you are comfortable with these terms, you can
skip this section.

Suppose we had a virtual sandwich and a virtual tofu-based meat substitute, both of which
understood Smalltalk commands. I'll call these fictitious objects Sandwich and Tofu. Every
object understands a collection of messages. The messages tell the object what to do. Likewise,
there are many objects that understand any given message. The power of object oriented
languages is the way you can mix and match messages and objects.

For starters, let's assume that Sandwich understands three messages: make, cut, and bake. And
that Tofu understands three messages: bake, fry, and marinate. The syntax for sending the make
message to the Sandwich might be this (the period is called a dot, so you would say “Sandwich
dot make”):
Sandwich.make;

If you wanted the Tofu to be baked you might write:


Tofu.bake;

You may be wondering if we need to give the make message and the bake message some
arguments to describe how the sandwich is made and the tofu is baked. Actually we don't. Most
messages have default values built into the code so you can occasionally leave them off. Try
running this line, which uses no arguments in the .ar message, in SC.
2.4 Defaults (scope, ar, SinOsc)

Synth.scope({SinOsc.ar})

The result is a sine tone at 200 Hz, 1.0 amplitude, at 0 phase. Those are the defaults for SinOsc.
Often you are able to use one or two of the defaults, but rarely will you use a message with
defaults only. Arguments allow us to change the nature of the object, or how it acts. But before
we do that we need to know what arguments each message uses and more importantly what they
mean. To find out you use the help files. To read a help file you highlight the item you want help
with and hit command-h. Try it with all the objects in lines 1 - 10. In each of the help files are
prototypes of all the messages understood by that object, with the list of arguments the message
needs. Sandwich and Tofu might be documented this way:
Sandwich
    *make(vegList, bread, meat)
    *cut(angle, number)
    *bake(temp, rackLevel)

Tofu
    *bake(temp, baste, rackLevel)
    *fry(temp, length, pan, oil)
    *marinate(sauce, time)

The secret to Smalltalk, and SC, is this: knowing what messages an object understands and
knowing what arguments the message uses to describe how the object acts. The arguments for
each message are different for each object. Often the same message will have different
arguments when used with a different object. (For example the bake message used with
Sandwich has two arguments, while when used with Tofu it has three. Not understanding this,
and using the same arguments with a message to different objects is a common beginner error.)
Finally it is important to understand what results you get when an object and a message are
combined.

Now that we understand what the arguments for Sandwich.make are, we could put together a
Sandwich with this mock code.
Sandwich.make([lettuce, tomato, pickle], rye, chicken)

or
Sandwich.cut(90, 1)

and
Tofu.marinate(peanut, 160)

The first line will make the Sandwich using the list of vegetables, bread, and chicken. The
second line will make one cut of the Sandwich at an angle of 90 degrees. The Tofu will be
marinated with peanut sauce for 160 minutes.

Another powerful aspect (the whole point, really) of SC and object oriented languages is nesting.
What if I wanted to make the sandwich using marinated Tofu? I would replace the variable
chicken with the entire section of Tofu code.

Sandwich.make([lettuce, tomato, pickle], rye, Tofu.marinate(peanut, 160))

We can continue to nest. You may have noticed in the first example the message "choose" which
was sent to the object [2, 3]. [2, 3] is an array of values and it (the array) understands the choose
message. It will return one of the values in the array. To confirm this, try running example 2.5
below 10 or so times. (In this case, run it using command-p.)
2.5 Choosing values from an array (array, choose)

[1, 3, 7, 23, 432, 4].choose;

Could a similar section of code be nested, and used to replace a static value? Yes. Take the
second argument in Tofu. (See if you can locate it on your own before I point it out.) It is
currently set to the static value 160. That is, it will always be 160, each time the program runs. If
that were replaced with [160, 100, 30].choose, then each time the code is run, one of those values
will be chosen for a marinate time. This is how it would be written out.
Sandwich.make(
    [lettuce, tomato, pickle],
    rye,
    Tofu.marinate(peanut, [160, 100, 30].choose))

When you tell SC to "run" or evaluate the code it begins by evaluating the innermost parts, and
uses those values to run the subsequent upper layers. In English, it might read like this: Pick a
number out of the array 160, 100, and 30. Use that number as the first argument for the marinate
message to Tofu. Also use peanut as a sauce. After marinating the tofu, use it as the third
argument (meat) for a sandwich.

In addition to the inner then outer levels of code we also read top to bottom and left to right.
Programs are a combination of all three. The upper lines of code are run first. Each line is
evaluated from left to right, but when there are nested lines the innermost parts are
executed first.
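A real SC line makes the same point. In the following sketch the innermost expression (choose) is evaluated first, its result is multiplied by 100, and only then is the final value posted.

// inner first: choose picks 1, 2, or 3; then the result is scaled and posted
([1, 2, 3].choose * 100).postln;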

If we wanted to bake the sandwich or the tofu, note that we would have to use different
arguments for each one. Sandwich.bake(130, 2) will bake the sandwich at 130 degrees on rack
level 2. But if I used these same arguments for the bake message to tofu (Tofu.bake(130, 2)) then
the tofu will be baked at a temperature of 130 degrees, but with a baste of 2, not a rack level of 2.
It could be that 2 is an invalid argument for baste, and even if it were valid it probably wouldn't
be the value we wanted. The SC equivalent might be the .ar message. This message is used for
quite a few Objects. The first argument is often frequency, but the second, third, and fourth
arguments are very different for SinOsc and LFPulse.

One method that ensures getting the correct argument in the correct position is to use a keyword.
The syntax for keywords is this:
Sandwich.make(
    vegList: [lettuce, tomato, pickle],
    bread: rye,
    meat: Tofu.marinate(sauce: peanut, time: [160, 100, 30].choose))

Using keywords also allows the programmer the option of leaving off arguments (thus using the
defaults). Suppose the default for vegList was [lettuce, tomato, pickle]. We wouldn't need to
enter that argument, so the code could be done like this:
Sandwich.make(
    bread: rye,
    meat: Tofu.marinate(sauce: peanut, time: [160, 100, 30].choose))
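The same idea works with real SC objects. Keyword assignment is covered properly in chapter 4, but as a quick preview (the keyword names freq and mul come from SinOsc's help file), this line names two arguments and lets the others fall back to their defaults:

// keywords with a real ugen; phase and add use their defaults
Synth.scope({SinOsc.ar(freq: 440, mul: 0.3)})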

A few additional points before moving to actual code.

It is possible to link messages. For example Sandwich.make.bake.cut would first make the
sandwich (in this case using defaults), then bake it (using defaults), then cut it (with defaults).
You can also use an Object/message nested in the same Object/message. For example, you could
write Tofu.marinate(Tofu.marinate(peanut, 60), 60). In this case, a batch of tofu will be
marinated in peanut sauce for 60 minutes, then another batch of tofu will be marinated in that
batch of marinated tofu (ick!).
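Message chaining works the same way with real objects. Here is a small sketch using messages that appear later in this text (choose and midicps): each message is applied, left to right, to the result of the previous one.

// choose a MIDI note number, convert it to a frequency in Hz, then post it
[60, 62, 64, 65, 67].choose.midicps.postln;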

Experimenting With a Patch (Just for fun)

Take another look at the first two examples (duplicated below as ex. 2.6 and 2.7). With a little poking
around you can probably figure out what effect each value would have on the sound. Take line 1
as the first example. Highlight the Synth object and open the help file (command-h). The
documentation for play is a few lines down. It reads *play(ugenGraphFunc, duration). This tells
us that there are two arguments for play: a ugenGraphFunc, and duration. Remember that
everything between { and } is a function, so all of the code between the opening { and closing }
is the ugenGraphFunc argument. (Try either double clicking on one of the enclosures, or click
inside the function and hit command-` to balance the enclosures.) Following that argument, after
the closing }, is a comma and the second argument: 5. The 5, therefore, is the duration argument.
It affects the duration of the playback. Try changing it to 1, 0.5, 20, etc.

In the LFNoise0 the arguments are freq, mul, and add. We will discuss how to use these values in
depth later; I'll give a brief description for now. The first argument (10) represents how often a
new frequency is chosen for the SinOsc. The second value, 400, is half the range of the values
being chosen, so the actual range is 800 wide. The last value, 800, is the center of that range:
subtract the second value to get the lowest frequency (400), add it to get the highest (1200). For
now just be sure the third value is greater than the second (by 30 or more). In a nutshell, the
LFNoise0 is generating values between 400 and 1200, 10 times per second. If you increase the 400
to 500, it generates frequencies between 300 and 1300, 10 times per second.
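If you want to check the arithmetic yourself, here is a quick calculation (data only, no sound): the raw output of LFNoise0 moves between -1 and 1 and is then multiplied by the second argument and offset by the third.

// lowest possible value: -1 * 400 + 800 = 400
(-1 * 400 + 800).postln;
// highest possible value: 1 * 400 + 800 = 1200
(1 * 400 + 800).postln;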

So how does this fit into the entire patch? Look at the documentation for SinOsc and you will see
that the first argument is "freq." The entire LFNoise0.ar(etc.) is being used as the freq argument
in the SinOsc. To confirm this, try replacing the LFNoise0.ar(10, 400, 800) with a static value
such as 300. In this case you will hear a single pitch: 300 Hz.
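Here is that same patch written out with the static value in place, as a quick check (compare it with example 2.1; only the frequency argument has changed):

// the nested LFNoise0 replaced by a fixed frequency of 300 Hz
Synth.play({SinOsc.ar(300, 0, 0.3)}, 5)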

Look at the help documentation for the second patch by selecting items and using command-h.
See if you can make sense of the arguments in relation to the actual sound. Try changing the
values but first try to predict what effect the change you make will have before you run the code.
I've bolded those arguments that you can change safely and have an interesting effect on the
patch. Remember that you have to select all the lines in the second patch, from "Synth.scope", to
the final ");".
2.6 experiment (LFNoise0, SinOsc, RLPF, LFSaw, LFNoise1, choose)

Synth.play({SinOsc.ar(LFNoise0.ar(10, 400, 800), 0, 0.3)}, 5)

2.7 experiment

Synth.scope(
    {
        RLPF.ar(
            LFSaw.ar([8, 12], 0.2),
            LFNoise1.ar([2, 3].choose, 1500, 1600),
            0.05
        )
    }, 0.05
);
//end patch

The LFSaw in the second patch uses an array ([8, 12]) as the freq argument. The first value is
used for the left channel, the other for the right (more on this later). The LFNoise1 is generating
values between 100 and 3100, which are used as the cutoff frequency of the RLPF (resonant low
pass filter). The LFSaw is the input signal. If you change the 1500 and 1600 be sure the second value (1600)
is larger than the first (1500).
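One way to hear what the LFNoise1 is contributing is to replace it with a fixed cutoff, as in this sketch; the sweeping quality disappears and only the pulsing saw waves remain.

// the same patch with a static 800 Hz cutoff in place of the LFNoise1
Synth.scope(
    {
        RLPF.ar(
            LFSaw.ar([8, 12], 0.2),
            800,
            0.05
        )
    }, 0.05
)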

Feel free to experiment. Here is one of the reasons I like to teach with SC. You can experiment
safely. When working with the old analog synthesizers entering incorrect values or patching
errors (e.g. outs to outs) could actually damage the equipment. But in SC, in general you can try
anything without damaging the equipment. There is indeed a danger of crashing the machine or
getting error messages or possibly damaging your ears if the sound is too loud. But you can
experiment at will.

If you make changes that result in an interesting sound you can easily document or save the
patch. To do this choose "save as" and name the file. Alternately you can copy and paste the
lines of code into a file where you can collect these "patches" for your portfolio. This is how you
should proceed for the rest of the semester. Try the patches I give you. If you come up with
something interesting, save it, comment it (this will be covered later), and save it as a patch you
will use later.

Electronic music has always been fun for me. I hope this text will be informative and enjoyable.
In order to avoid getting bored with the technical stuff I will try to include at least one interesting
patch at the end of each chapter. Don't worry about the parts we haven't discussed (or,
alternatively, look up the help file). Just have fun.

Section I Digital Synthesis

3. The Science of Sound

From here on I suggest you follow a text on the basics of sound and synthesis. I have used An
Introduction to the Creation of Electroacoustic Music by Samuel Pellman. I won't go into as
much detail about the examples you might encounter in a synthesis text. My goal here is to
explain how to generate such examples in SC.

Every synthesizer you will ever work with, whether an avant-garde generative computer installation
or a Casio organ from the local Target, will use these terms and techniques: frequency, amplitude,
envelopes, timbre, triggers, and occasionally phase.

Frequency

3 Assignment:
Use the patches below and change the arguments for frequency, amplitude, and phase.
Change the frequency (200) to values between 0.1 and 20000. (Use patch 2 for values
between 0.1 and 60). Change the amplitude (0.3) to values between 0.1 and 2.0. Change the
phase to values between 0 and 6.28. Then answer the following for patch 1. The answers
may be different for each patch.
a) Identify objects and messages.
b) What are the arguments for the .ar message in each?
c) What are the arguments for the scope message?
d) What are the highest and lowest frequencies you can hear?
e) At what low frequency can you recognize pitch? At what high frequency do you stop
recognizing pitch?
f) What values for amplitude would you consider pp, mp, mf, and f?
g) What happens with values above 1.0?
h) Try different values for phase. How does it change the sound (character) of the wave?

Patch 1:

Synth.scope({SinOsc.ar(200, 0, 0.3)}, 0.1);

Patch 2: For low frequencies

Synth.scope({LFSaw.ar(60, 0.3)}, 0.1);

The first three topics covered by a synthesis text are frequency, amplitude, and phase. Frequency
is associated with pitch, and amplitude with volume. Phase comes into play when an oscillator is
used as a control or when several frequencies interact, but with a single wave in the audio range
it rarely affects the sound.

All three of these elements of sound can be easily demonstrated in SC using a single object and message: SinOsc.ar. The arguments for SinOsc.ar are freq, phase, mul, and add. The first three arguments correspond to pitch, phase, and amplitude. (The add argument is of no practical use in this example.)

Type the code below into an open SC window and run it. The scope message plays the function containing the SinOsc and also shows a graphic representation of the sound in a new window. The peaks represent the speaker moving out and the valleys represent the speaker moving in. The graph you see in the scope is an actual representation of how the speaker moves. As the speaker moves it compresses and rarefies the air, creating sound waves.
3.1 SinOsc

Synth.scope({SinOsc.ar(440, 0, 0.4, 0)}, 0.1)

The first argument for scope is the function enclosing the SinOsc. The second argument for scope is 0.1 and represents the size of the scope window. It is set to 0.1, meaning the window shows 1/10th of a second. The first argument for SinOsc.ar is freq, set to 440, meaning the SinOsc will generate a tone at 440 Hz (A). If you set the frequency to 100 you should see 10 waves in the scope.
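For instance, here is the same patch at 100 Hz (a variation on the example above, not one of the numbered examples); with the window still at 0.1 seconds you should be able to count the 10 periods.

Synth.scope({SinOsc.ar(100, 0, 0.4)}, 0.1)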

Try changing the frequency (first argument) to values between 10 and 20000. Keep in mind that
if you can't hear a high or low frequency it may be the fault of the equipment, not your ears. It
could very well be that you can hear 20,000 Hz, but the speakers you are using may not be able
to reproduce 20,000 Hz. This is why you see the term frequency response in literature regarding
home audio equipment. A wider frequency response means better equipment.

What happens if you try values below 10?

The next example uses a different type of wave. Since the SinOsc generates a very smooth period you can't hear the change at very low frequencies. The speakers may be compressing the air, but we don't hear the change. (Though you might be able to feel it.) In the example below, instead of SinOsc we use LFSaw, which generates a saw wave shape. (We'll discuss wave shapes later.) LF stands for low frequency; the LFSaw ugen is designed for frequencies below 60 Hz. Try changing the frequency argument to values between 0.1 and 60. For values below 1.0 you might want to change the argument for the size of the window: change it to 1 to represent one second, or to 10 to show 10 seconds at a time. (Be patient; the scope may take some time to update.)
3.2 LFSaw

Synth.scope({LFSaw.ar(60, 0.3)}, 0.1)

Amplitude

We can use the same example to examine amplitude. There are a number of ways you can
change the amplitude in an SC patch. But in this example the most logical method is to change

the mul argument in the SinOsc.ar. Remember the help file for the SinOsc showed: SinOsc.ar(freq, phase, mul, add). The mul argument scales or multiplies the output. The default value for mul is 1.0. This means the sound wave will be 1 at its highest point and -1 at its lowest point. You may have noticed with the last patch that a very faint 1 and -1 appear at the top and bottom of the scope window. Values that reach 1 and -1 are at maximum amplitude. That seems a little loud to me. (It's the maximum. I think this is just a reaction from working in analog formats for so long; one learns to avoid maximum values.) Before you run this example you might want to turn the computer's audio down, or take off the headphones.
3.3 amp

Synth.scope({SinOsc.ar(200, 0, 1.0)})

Try changing the mul to values between 0.1 and 0.9.

The next example uses a mul of 2 so the output moves between -2 and 2. (What would the output be if you set this value to 500?) Values of 2 and -2 are far too high for volume. Turn the computer's volume down before you try this example. This is an illustration of output saturation, clipping (the tops of the waves are clipped off), or distortion. In general distortion is a bad thing, but rock and roll was built on distorted guitars. Originally the distortion was the result of driving amplifiers beyond their capacity in an attempt to meet the demands of large halls and the power of rock. Eventually it became a signature sound, and components were built to imitate electronically what engineers had at first tried to avoid.
3.4 distortion

Synth.scope({SinOsc.ar(200, 0, 2)})

If you want a distorted sound there are better ways to achieve this effect than using excessive values for amplitude. It is very poor style to go beyond 1.0. Studio engineers with a background in analog audio cringe any time you get anywhere near 1.0. On some systems it is actually dangerous to the equipment to play sounds that loud: you are going beyond the equipment's capability to reproduce the sound. In the digital realm it is not as dangerous (you won't damage the program), but you compromise the sound. More importantly, you give away your lack of experience. A clipped signal is telling.
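If you do want that crunchy color, one safer sketch (my own suggestion, borrowing the .fold2 message that appears in a later patch in this text) is to fold the overdriven wave back on itself and then scale the result down, so the final output never approaches 1.0:

Synth.scope({SinOsc.ar(200, mul: 2).fold2(1.0)*0.3})

The folded peaks supply the distorted character while the output level stays safely at 0.3.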

Phase

I only use phase occasionally in composition. In most cases the quality of sound is not affected by phase, but it's an important concept for understanding how timbre, phase cancellation, and constructive or destructive interference work. Not many oscillators in SC have a phase argument, but SinOsc does. Here is the line we started with.
3.5 phase

Synth.scope({SinOsc.ar(100, 0, 0.7)}, 0.01)

The second argument (0) is phase. The first window below shows a phase of 0 with frequency at
100 and duration (window length) at 0.01. The second window shows a wave that is 180 degrees

out of phase. What value is required in the phase argument to achieve this effect? (The value is
between 0.1 and 4.0.)

Do changes in phase change the sound?

There is an easy way to plot a signal to a window without using the synthesizer to play it. It is
done using the message "plot" rather than play.
3.6 plot

Synth.plot({SinOsc.ar(100, 0, 0.7)}, 0.01);

You may have discovered with the graphs above and with your experimentation that 3.0 will take it about 180 degrees out of phase. This is because 3.0 is close to pi. SC understands pi, so you can use it in the code at the phase argument. A value of 2pi will take it 360 degrees out of phase, which returns it to the original position.
3.7 pi and phase

{SinOsc.ar(100, pi)}.plot
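To see the two versions side by side, here is a small two-channel sketch of my own: the left channel has a phase of 0 and the right channel a phase of pi, so the right wave is the mirror image of the left.

Synth.scope({SinOsc.ar(100, [0, pi], 0.5)}, 0.01)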

This next example is a bit of a kluge, but it is an interesting way to demonstrate phase visually (it will have no effect on the sound). The window size is set to 1/100th of a second, so a frequency of 200 is in sync with the window. Adding 0.1 to the frequency makes the phase slowly march across the screen. After 5 seconds the wave will be 180 degrees out of phase; after 10 seconds it will be back in phase (with the window).
3.8 phase and scope

Synth.scope({SinOsc.ar(200.1, mul: 0.5)}, 0.01)

Just For Fun

The patch below uses three ugens to generate sound: an LFNoise1, a SinOsc, and an LFSaw. To stay consistent with what we just studied, all of them deal pretty much with frequency only. The SinOsc is where the actual audio frequency is generated. Its frequency is determined by the LFNoise1 and the LFSaw, which are linked in a rather complicated way. It would be easy to enter values that result in negative output (not appropriate for frequency), so the control section is wrapped in an abs() function to protect against this. You can experiment at will, but the range of sweep should be greater than the range of overall wandering. That entire section is plugged into a Comb filter. The max delay should always be greater than the actual delay. The decay time is how long the echo resonates. Have fun.
3.9 Just for fun (CombN, SinOsc, abs, LFNoise1, LFSaw, array)

Synth.scope({
    CombN.ar( //An echo ugen
        SinOsc.ar( //Sine wave osc
            abs( //This function protects against negative values
                LFNoise1.kr(
                    0.4, //frequency overall wandering
                    800, //range of overall wandering
                    LFSaw.kr(
                        [3.5, 3.51], //left and right channel
                                     //of the individual sweeps
                        300, //width of sweep
                        1200 //range of sweep
                    )
                )
            ),
            0,
            0.2 //volume, stay below 0.2
        ),
        0.6, //max delay
        0.27, //actual delay, should always be less than max
        4 //decay time
    )
})

//end patch

4. Keyword Assignment, MouseX.kr and MouseY.kr, Linear and
Exponential values

Keyword Assignment

4 Assignment:
a) Rewrite the patch below using keywords.

Synth.scope(
{
RLPF.ar(
LFSaw.ar([8, 12], 0.2),
LFNoise1.ar([2, 3].choose, 1500, 1600),
0.05
)
}, 0.05
)

b) Using MouseX.kr and MouseY.kr as arguments for mul and freq of a SinOsc, create a
patch that emulates the classic Theremin.

With the examples above we used most of the arguments in each message. You may have noticed that we often left off the add argument. Objects and messages are set up with defaults that are used if no value is specified. In the previous examples the default value was 0, or no change, and that's what we needed. Below there are no arguments for the SinOsc at all, but it still works; it uses the default values of 200, 0, 1, and 0.
4.1 Defaults

Synth.scope({SinOsc.ar})

You might have also noticed that we entered a 0 for the phase argument even though the default
value for phase is 0. This is because the arguments have to be entered in the correct order. Phase
is the second argument and mul is the third. We had to enter the 0 as a sort of place marker so
that the mul would be in the correct position.

The defaults are handy because you only have to enter arguments where the default value needs to be changed. But here is a problem: if the arguments have to be in order and there are 10 arguments, what do you do if you want to change only the sixth value? You would have to enter the first five even though you aren't changing them from the defaults, just so the sixth argument will be in the correct position. This is not only cumbersome to type, but error prone. It is also difficult to remember what the arguments are and what their order is.

The technique to get around this is keyword assignment. Using keywords you can just specify
the name of the argument you are changing followed by the value. Keywords are the names of
each argument exactly as they are given in the documentation (help files2). The documentation
for SinOsc.ar is "SinOsc.ar(freq, phase, mul, add)." The keywords are freq, phase, mul, and add.
The syntax for using keywords is the keyword followed by a colon, then the value. Using
keywords not only allows you to enter a single value, but to mix the order of the arguments. Here
are several versions of the SinOsc example written using keywords. All of them have precisely
the same meaning. (In the last example I also use keywords for the scope arguments.)
4.2 keywords, indents

Synth.scope({SinOsc.ar(freq: 440, phase: 0, mul: 0.4, add: 0)}, 0.1);

Synth.scope({SinOsc.ar(phase: 0, freq: 440, add: 0, mul: 0.4)}, 0.1);

Synth.scope({SinOsc.ar(freq: 440, mul: 0.4)}, 0.1);

Synth.scope(
    ugenGraphFunc: {
        SinOsc.ar(
            freq: 440,
            phase: 0,
            mul: 0.4,
            add: 0)},
    duration: 0.1);
//end patch

Another good reason for using keywords is that your code becomes self-documenting. In the first
examples it may have been hard to see what each of the numbers in the argument list meant. But
in these examples it is clear what the freq and mul arguments are.

I've also spread the last example out. Up until now each patch has been contained on a single
line. It is more common to see code spread out over a number of lines. It's actually easier to read.
The way you use white space changes a little with each author, but a common practice is to
indent sections of code to indicate which level of enclosures they belong to. In the example
above the object Synth is flush left. It is the only item on that level because everything else is
enclosed in the scope enclosures. The ugenGraphFunc and duration are on the same level
because they are both arguments for the scope message. I begin a new line for SinOsc, and new
level of indentation because it is contained in the function. There is only one item in this
function, but it is more common to have a number of lines (even pages of lines) inside a function.
The freq, phase, mul, and add are also all on the same level of indentation. Look again at the list of command keys in the basics section; this is where command-] and command-[ come in handy. They shift the indentation of the current line to the right or left, and you can select several lines and shift their indentation at once.

2. On rereading I realize this isn't true. The arguments in the help file are usually correct, but I've encountered two examples where the help files were wrong. The most accurate source for argument names is command-y for messages and command-j for objects. These commands open the source code.

MouseX.kr and MouseY.kr

In the first example you saw how we could use an entire ugen as one of the arguments in another
ugen. We will see many more examples when we look at controls and control sources. But for
now I would like to illustrate two ugens that are very useful when performing experiments on
argument values. In the examples above we changed each value, then ran the code, then changed
the value and ran the example again. You may have wished you could attach some type of knob
to the patch so that you could try a range of values at once. This is what MouseX.kr and
MouseY.kr will do. They link mouse movement and position to values that can be used in the
patch. The first three arguments are: MouseX.kr(minval, maxval, warp). They represent
minimum value, maximum value, and warp.

These ugens can be used to replace a static value with a range of values that change in real time
in relation to the position of the mouse on the screen. As an illustration, try the first patch
reproduced below with a MouseX in place of the first argument for LFNoise03.
4.3 MouseX (LFNoise0, SinOsc, mul)

Synth.scope({SinOsc.ar(LFNoise0.ar(MouseX.kr(1, 50), 500, 600), mul: 0.5)});

Much easier than changing, trying, changing, trying, etc.

In the example below MouseY (the Y axis, top to bottom) is used to control amplitude. The minval is 0.9 and the maxval is 0.0. These may seem backwards, but the minimum "position" for the mouse is actually the top of the screen and the maximum is the bottom. That would feel backwards for amplitude, so the two values are reversed.
4.4 MouseY

Synth.scope({SinOsc.ar(440, mul: MouseY.kr(0.9, 0))});

The next example adds a MouseX to control frequency. The minimum value is A 220 and the
maximum is two octaves higher, or 880. Since it is two octaves you might be able to play a tune
with this patch.
4.5 MouseX controlling frequency

Synth.scope({SinOsc.ar(MouseX.kr(220, 880), mul: 0.3)});

The reason it is difficult to pick out a melody is that the warp is set to linear (the default). A
linear warp means the numeric values change with the same proportion as the motion and
position of the mouse. But the increase of frequency over successive octaves is not linear, but
exponential. The amount of change between one of the low octaves (such as 110 to 220) is
smaller (110) in comparison to a higher octave (1760 to 3520; a difference of 1760). With a

3. I've introduced a number of control sources into patches. Voltage control, or machine control, is the goal of this section. More control sources will be discussed in a later chapter.

linear warp and a range of 220 to 880 the middle of the screen will be 550. But if we want the
screen to visually represent a musical scale the middle should be 440, the left half for pitches
between 220 and 440 (a difference of 220) and the right half should be 440 to 880 (a difference
of 440). This can be changed using the warp. The warp value is set using a symbol. A symbol is
a word in single quotes. Try playing a melody with the following adjustments.
4.6 exponential change

Synth.scope({SinOsc.ar(MouseX.kr(220, 880, 'exponential'), mul: 0.3)});

As a general rule you will want to use an exponential warp when dealing with frequency.
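A quick check of this (assuming the exponential warp places the geometric mean of the two extremes at the center of the screen): run each of these lines and compare the linear and exponential midpoints of the 220 to 880 range.

((220 + 880)/2).postln; //linear midpoint: 550
(220*880).sqrt.postln; //exponential midpoint: 440, exactly one octave above 220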

I introduce MouseX and MouseY as tools for trying values in patches and experimenting in real time, but they are reminiscent of one of the earliest (and very successful) electronic instruments: the Theremin. The Theremin controlled pitch and amplitude by the proximity of the performer's hands to an antenna and a protruding loop. Performers "played" the Theremin by moving their hands closer to or farther away from it.

Use the components we have discussed to create a virtual Theremin where MouseY controls
amplitude and MouseX controls frequency. To imitate that classic sci-fi sound you should try to
play it with a heavy vibrato.

Just For Fun

The patch below uses variables, which are discussed in the next chapter. As with the previous patch I want to point out that this patch is all about frequency. It uses only three SinOscs but can generate very rich sounds because of the way they are linked together. The patch uses the OverlapTexture ugen, which spawns one event after another.

There are two mouse controls and the arguments for the patch are taken from the mouse position, but they are not set up for continuous change. That is to say, the change does not occur in precise relation to the movement of the mouse. Rather, the .poll message is used to take a "snapshot" of the mouse position at the time a new event begins. So the mouse should be used to "explore" this sonic labyrinth; move the mouse to a position, listen for a while, then move it again. Try the four corners first, then some spots in the middle. If you want the mouse to change the sound continuously, simply remove any or all of the .poll messages.

I don't think I would change much on this one. Maybe you could change the high and low frequencies for the MouseX and MouseY, or the frequency of the innermost SinOsc (now set to rrand(0.1, 20.0); try changing it to a single static value, like 8). Just sit back and listen.
4.7 Just for fun (MouseX and Y, OverlapTexture, Pan2)

Synth.scope({
    var control1, control2;
    control1 = MouseX.kr(60, 1000, 'exponential');
    control2 = MouseY.kr(60, 5000, 'exponential');
    OverlapTexture.ar({
        Pan2.ar(
            in: SinOsc.ar(
                freq: SinOsc.ar(
                    freq: SinOsc.ar(
                        freq: rrand(0.1, 20.0),
                        mul: control1.poll, add: 1.1*control1.poll
                    ),
                    mul: control2.poll, add: 1.1*control2.poll
                ),
                mul: MouseY.kr(0.12, 0.05)
            ),
            pos: 1.0.rand
        )},
        sustainTime: 4, //length of each event
        transitionTime: 2, //overlap time
        density: 3, //number of simultaneous events
        numChannels: 2)})
//end patch

5. Variables, Comments, Triggers

Variables and Comments

5 Assignment:
a) Begin with the patch below. Change the values in the pitchCollection array (they
represent midi numbers) to values that will reproduce the first few measures of Bach's WTK
book I prelude in C major (or "Frere Jacques"). (You can repeat notes or enter a 0 value for a
rest.)

var pitchCollection;

pitchCollection = [60, 45, 67, 82, 55, 66, 62].midicps;

Synth.scope({
SinOsc.ar(
Sequencer.kr(
`pitchCollection,
Impulse.kr(8) //Trigger frequency
),
mul: 0.3
)
})

b) Convert the following to and from beats per minute and Hz.
2 Hz = ____ bpm
0.2 Hz = ____ bpm
3 Hz = ____ bpm
0.125 Hz = ____ bpm
120 bpm = ____Hz
100 bpm = ____ Hz
50 bpm = ____ Hz
200 bpm = ____ Hz

It is often useful in code to define and use your own terms and functions. Variables are used for this purpose. Remember that variable names can be anything, but they must begin with a lower case letter (not a number) and must be contiguous (no spaces). It is tempting to use short cryptic names such as tri, beg, pfun, freq, or even single letters such as a, b, c. More descriptive names take more time to type but are better in the long run: firstPitch, legalDurations, maximumAttack, etc.

Variables are "declared" (identified to the program) with the syntax "var" followed by a list of variables separated by commas, the list terminated by a semicolon. Variables are assigned values (the value is stored in the memory location associated with the variable) using the "="

sign followed by the value you want to store and a semicolon. The variable can then be used
throughout the program.

The code below also uses comments. Comments are identified using "//" followed by the
comment. The compiler ignores everything from the "//" to the end of the line.

To run the second example you must select all the lines beginning with "//First patch . ." all the
way to the bottom "})".
5.1 Variable declaration, assignment, and comments

//First patch

Synth.play({SinOsc.ar(LFNoise0.ar(10, 400, 800), 0, 0.3)}, 5)

//First patch with variables

Synth.play({

var freqRate, freqRange, lowValue;


freqRate = 10; //rate at which new values are chosen
freqRange = 1200; //the range of frequencies chosen
lowValue = 60; //the lowest frequency

SinOsc.ar(
    LFNoise0.ar(freqRate, freqRange/2, (freqRange/2) + lowValue),
    0, //phase
    0.3) //mul
})

The math that is performed on the variables in the LFNoise0 will be clearer later. One reason for
using variables is that they clarify the code. Earlier this example required some explanation of
what values were appropriate for arguments in LFNoise0. Here it is easier to understand because
they are identified and assigned before being used in code. This way the programmer can explain
not only where they are used but also what they mean. The meaning is indicated by the name of
the variable. These variables are declared and assigned inside the ugenGraphFunc, which is the
first argument for Synth.scope.

Variables can be used to clarify code, and they can be used for consistency. One variable may be used hundreds of times in a program or patch. Using a single variable ensures that when you change its value, every place it is used in the patch changes with it.

Another important use for variables is to link arguments. They can be thought of as patch cords that can be plugged into several related inputs or arguments. Suppose you created a patch that plays a range of frequencies. You might want to reduce the volume when higher frequencies are played and increase it with lower frequencies. You might also want the cutoff of a filter to change in proportion to the frequency, or the decay rate of an envelope to decrease with higher frequencies (which is what happens on many instruments). Without a common variable you would have to change them all by hand:

freq = 100 //low frequency
amp = 0.7 //higher amp
cut = 600 //filter cutoff
rate = 2.0 //decay rate

freq = 1200 //higher frequency


amp = 0.3 //lower amplitude
cut = 7200 //higher cutoff
rate = 0.1 //decay rate

With a single variable, they can be linked using expressions:

var freq, amp, cut, rate;

freq = 100; //frequency may change

amp = 70/freq; //amp goes down as freq goes up
cut = freq*6; //cutoff changes in proportion to freq
rate = 100/freq; //decay shortens as freq goes up
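Here is a minimal working sketch of that idea (my own example, linking only the filter cutoff to the frequency): as you sweep the mouse the RLPF cutoff follows the frequency, so the filter opens and closes with the pitch.

Synth.scope({
    var freq;
    freq = MouseX.kr(100, 1200, 'exponential');
    RLPF.ar(
        Saw.ar(freq, 0.3), //the source
        freq*6, //cutoff linked to the frequency
        0.2) //rq
})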

Triggers, Gates

All the patches we've tried so far have produced steady state sound. The sound may have changed, but it was always running. Musical events have starting points, ending points, and envelopes (described in the next chapter). On an instrument an event is set off by a trigger, such as plucking a guitar string or striking a cymbal. Synthesizer events are likewise set in motion by a trigger or gate. When you press a key on a consumer synthesizer the first piece of information sent to the processor is a trigger to begin that event. Triggers can come from a variety of sources, but they are usually split-second positive pulses from a unit generator. Below are examples of two triggers: the first is a periodic trigger, the second is a random trigger. The first two examples plot the output, the second two play it. These ugens were not really intended for audio output; they are used to trigger other events. They both take one argument. For Impulse4 this number represents the number of pulses per second. For Dust, it is the average number of pulses per second, or average density.
5.2 Trigger (Impulse, Dust)

Synth.plot({Impulse.ar(3, mul: 0.5)}, 5);

Synth.plot({Dust.ar(5)});

Synth.scope({Impulse.ar(4, mul: 0.5)}, 1)

Synth.scope({Dust.ar(5)}, 1);

4. In the plot example the message .ar is used. In the sequencer example .kr must be used. They stand for audio rate and control (kontrol) rate. The .kr message is appropriate for low frequency ugens, .ar is appropriate for audio rate frequencies.

Triggers are such an integral part of synthesis that it is hard to do a simple example. So in the patch below ignore the things that are unclear and focus on the trigger. In this example the trigger is the second argument of a sequencer. It simply tells the sequencer to move to the next value. Try replacing the Impulse.kr with Dust.kr. Try changing the array of midi pitches (the pitchCollection line). Also change the frequency of the Impulse.kr.
5.3 Sequencer (array, midicps, Sequencer, Impulse)

var pitchCollection;

pitchCollection = [60, 45, 67, 82, 55, 66, 62].midicps;

Synth.scope({
SinOsc.ar(
Sequencer.kr(
`pitchCollection,
Impulse.kr(8) //Trigger frequency
),
mul: 0.3
)
})

A gate is a timed trigger. (A trigger is a gate with no duration.)5 A gate is used to describe events
that include a sustained portion. A single trigger can be used for percussive instruments like
pianos, guitars, and marimba, where the amount of decay is out of the control of the performer
once the key is struck. The decay is fixed and is essentially the same for every note. A gate is
required for envelopes that have a duration that can be controlled by the performer. An organ
would fall into this category; it will play for as long as you hold the key down. When you release
the key, it goes through the final decay. The performer is creating a gate by holding down the
key. The length of the gate is the amount of time the key is held down.

As of this writing I believe the way SC implements envelopes and the way we will use them for
synthesis makes the distinction between a gate and trigger obsolete. We will use a trigger to
engage an envelope, and the Env ugen to describe the shape of the envelope. In some cases the
envelope will have a sustain portion, in some cases it will not.

Just For Fun

5.4 Just for Fun (Dust, Sequencer, LFNoise1, OverlapTexture, CombN, RLPF,
Pan2)
(//line 1
Synth.scope({
var outMix, frequency, rate, seqTrigRate, foldFunc, foldFuncRate,
foldFuncWidth, foldFuncFocus, delayFunction, filterCutoff,
delayMax, delayMin, echoDecay, minFreq, maxFreq, freqRate; //line 5

5. The author of SC uses the terms trigger, timed trigger, and gate. What most texts describe as a gate is a timed trigger in SuperCollider. The gate ugen in SC is slightly different from the classic synthesizer gate and will not be covered in this text.

minFreq = 20;
maxFreq = 40;
freqRate = 1/3;
foldFuncWidth = 0.05; //line 10
foldFuncFocus = 0.001;
foldFuncRate = 0.5;
filterCutoff = 1000;
delayMax = 0.4;
delayMin = 0.01; //line 15
echoDecay = 4;
seqTrigRate = Dust.kr(1/3); //rate trigger; one in three seconds
rate = Sequencer.kr(
{[1/4, 3, 6, 3/4, 14, 20].choose}, //rates
seqTrigRate //line 20
); //trigger for new rate
delayFunction = LFNoise1.kr(
rate,
mul: delayMax/2, add: delayMax/2 + delayMin);
//line 25
frequency = LFNoise1.kr(
freq: freqRate,
mul: (maxFreq - minFreq)/2,
add: ((maxFreq - minFreq)/2) + minFreq);
//line 30
foldFunc = LFNoise1.kr(
freq: foldFuncRate,
mul: foldFuncWidth,
add: foldFuncWidth + foldFuncFocus
); //line 35

OverlapTexture.ar({
outMix = SinOsc.ar(frequency, mul: 0.8).fold2(foldFunc);
outMix = CombN.ar(outMix, delayMax,
delayFunction, echoDecay, add: outMix); //line 40
outMix = RLPF.ar(outMix, filterCutoff);
outMix = Pan2.ar(outMix, LFNoise1.kr(rate));
outMix;
},
sustainTime: 3, //length of each event, line 45
transitionTime: 1, //overlap time
density: 2, //number of simultaneous events
numChannels: 2)
}, 0.1)
)
//end patch

Ok, this is a pretty complicated patch, but it uses several things that were covered in this chapter. Notice that all of the variables are declared and assigned at the top. It may look more complicated, but it is easier to see how each value can change the patch when it is assigned to a variable. This same example could be written with about 12 lines of code and no variables, but it would be much more difficult to follow. The same patch is duplicated below without variables or comments.

Notice that the variable "rate" is used in two places in the patch, lines 23 and 42. By using the same variable in both places two aspects of the sound (the pan position and the delay rate) are linked together. Try replacing either or both of freqRate on line 27 and foldFuncRate on line 32 with "rate." That way four elements of the sound will be linked together.

Notice also the comments. One technique for isolating sections of the sound, and for debugging, is to "comment out" lines of code. That is, place a "//" in front of a line so that it is no longer part of the patch. It will be ignored by the compiler, but it will still be there, so you don't lose the information. Try commenting out lines 41 and/or 42. Try commenting out lines 39 and 40 (you have to do these two together, or you will get an error).

Have fun.
5.5 Compressed fun

(
Synth.scope({
var outMix, rate;
rate = Sequencer.kr({[1/4, 3, 6, 3/4, 14, 20].choose}, Dust.kr(1/3));
OverlapTexture.ar({
outMix = SinOsc.ar( LFNoise1.kr(1/3, 10, 30),
mul: 0.8).fold2( LFNoise1.kr(0.5, 0.05, 0.051));
Pan2.ar(RLPF.ar(
CombN.ar(outMix, 0.5, LFNoise1.kr(rate, 0.2, 0.21), 4, add: outMix),
1000),
LFNoise1.kr(rate));
}, 3, 1, 2, 2)
}, 0.1)
)
//end patch

6. Envelopes, Reciprocals

Envelopes

6 Assignment
a) Begin with the patch below and create a second envelope and envelope generator to
control pitch. (You can use the same trigger for both envelopes, or a different trigger.) Place
it in the freq argument for SinOsc (replace the 400). It will look a lot like the envelope that
is controlling amplitude, but the values must be scaled to a level appropriate for pitch (e.g.
400 to 1200).

Synth.scope(
{
var env, trig;

trig = Impulse.kr(1);
env = Env.linen(0.1, 0.2, 0.3, 0.5);

SinOsc.ar(400, mul: EnvGen.kr(env, gate: trig))


}, 1)

b) Continue with the previous assignment (reproduced below). Add a variable for duration and link the trigger and envelope components to the duration. For example, if the duration is 1 second then the trigger should be 1 time per second, the attack might be 0.1, the sustain 0.3, and the decay 0.5. But if the duration is changed to 2 seconds, then the trigger should be 0.5 times per second, the attack 0.2, the sustain 0.6, and the decay 1.0. If duration is set to 0.5 then the trigger would be 2, the attack 0.05, the sustain 0.15, and the decay 0.25. (It's a story problem!)

var pitchCollection, env, trig;

Synth.scope({

pitchCollection = [60, 45, 67, 82, 55, 66, 62].midicps;


trig = Impulse.kr(2);

env = Env.linen(0.01, 0.1, 0.3, 0.5);


SinOsc.ar(
Sequencer.kr(
`pitchCollection,
trig //Trigger frequency
),
mul: EnvGen.kr(env, gate: trig)
)
})

Envelopes6 describe a control source that changes over the course of a single event. Typically they are used to control amplitude, but they can be applied to any aspect of sound.

We all know the amplitude of a musical event decreases over time (this is called decay), but
there is also an attack, which describes the beginning of a sound. A marimba has a sharp attack.
An organ has a softer attack. A violin may have a short or a very long gradual attack. A piano
has a sharp attack, but not as sharp as a wood block. All presets on all synthesizers use envelopes
to describe the change of volume over time. You may be surprised to discover how easy it is to
distinguish between very small variations in attack time.

The most common terms used to describe envelopes are attack, decay, sustain, and release or
ADSR. Simpler envelopes may include only AR, or attack and release. Below is a graph showing
how each of these terms relate to an envelope.

[Figure: a generic amplitude envelope with its segments labeled Attack, Decay, Sustain, and Release.]

There are two types of envelopes: fixed duration and sustain envelopes. Fixed duration envelopes usually have just an attack and a release. Sustain envelopes have a sustain portion that corresponds to the length of a gate (the gate being defined by how long you hold down a key on a synthesizer keyboard). Below are examples of three fixed envelopes and one sustain envelope. Each is set to plot to the screen; we'll insert them in a patch later. Each is followed by its argument list. Level refers to the peak amplitude of the envelope. All other values refer to time in seconds.
6.1 Envelopes plotted (plot, perc, triangle, sine, linen, Env)

Env.perc(0.1, 0.5, 0.7).plot; //attack, release, level

Env.triangle(2, 0.5).plot; //length, level

Env.sine(3, 0.5).plot; //length, level

Env.linen(0.1, 0.6, 0.3, 0.7).plot; //attack, sustain, release, level

With even the most sophisticated synthesizers you rarely have envelopes more complicated than
an ADSR. This is generally seen as a limitation since many real instruments (e.g. violin, voice,

6. The name comes from a time when samples were stored on sections of recording tape and kept in envelopes. When you wanted a sound you would select an envelope.

and saxophone) are capable of much more complex variations in amplitude. SC allows for this
level of complexity with the .new message. The first two arguments are arrays. The first array
contains levels, the second array contains times.
6.2 Complex Envelope (Env.new)

Env.new(
[0.01, 1.0, 0.6, 0.8, 0.6, 0.75, 0.4, 0.6, 0.3, 0.01]*0.5,
[1, 0.5, 0.5, 0.7, 0.3, 0.6, 0.5, 0.8, 0.4]).plot

Once you have described an envelope using Env you must place it in the EnvGen ugen with
some type of trigger or gate. In many patches the trigger is (magically) supplied by components
of the patch. But in the example below the trigger is supplied by an Impulse object. I use a
variable to store the envelope and trigger. They are declared and assigned inside the
ugenGraphFunc, which is the first argument for Synth.scope. These two variables are then placed
in the ugen EnvGen as the first (env) and seventh (gate7) argument. Since gate is the seventh
argument the keyword protocol is used. That entire combination is used as the mul argument for
the SinOsc. Instead of a static value (e.g. 0.3) for mul, the EnvGen will supply a continuous
stream of values that will describe the shape of the envelope.
6.3 Envelope and Envelope Generator controlling amplitude (Impulse, linen,
EnvGen, SinOsc)

Synth.scope(
{
var env, trig;

trig = Impulse.kr(1);
env = Env.linen(0.1, 0.2, 0.3, 0.5);

SinOsc.ar(400, mul: EnvGen.kr(env, gate: trig))


}, 1)

The arguments for linen are attack, sustain, decay, and level. Try changing each of these. The
argument for the size of the scope window has been set to one second so that you can see the
shape of the envelope. The values in linen represent real time. So if they total more than one
second the envelopes will overlap because the trigger rate is one time per second.
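To hear the more complex shape from the Env.new example above rather than just plot it, here is a sketch of my own (not one of the numbered examples). The envelope lasts a little over five seconds, so the trigger only needs to fire once during the six seconds the patch plays.

Synth.play({
    var env;
    env = Env.new(
        [0.01, 1.0, 0.6, 0.8, 0.6, 0.75, 0.4, 0.6, 0.3, 0.01]*0.5,
        [1, 0.5, 0.5, 0.7, 0.3, 0.6, 0.5, 0.8, 0.4]);
    SinOsc.ar(400, mul: EnvGen.kr(env, gate: Impulse.kr(1/6)))
}, 6)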

The default level for Env and EnvGen is 1. The EnvGen begins at 0, rises to 1, then decays back
to 0. This is appropriate for amplitude because 1 is a maximum for volume. But for the
assignment the envelope generator must create values in the audio range, such as 300 to 1000.
The second and third arguments (mul and add) in EnvGen can be used to scale and offset the
output to values appropriate for frequency. By changing mul to, for example, 400, the EnvGen
will begin at 0, rise to 400 at its peak, then fall back to 0. This is still not useful for frequency

7. Remember a trigger is a gate with no duration, so a trigger can be used as the argument for gate.

because it begins and stops at 0. If you set add to 200 then the EnvGen will begin at 200, rise to
600, then fall back to 200. The following envelope would be appropriate for a frequency control.
6.4 Scaled envelope for frequency (EnvGen, Env, linen, mul, add)

EnvGen.kr(Env.linen(1.5, 2.5, 1.2, 0.3), mul: 1000, add: 500)

The second assignment requires you to link different values in an envelope to a duration and a
trigger. First try the envelope components.

In the previous examples the envelopes have all been described in terms of hundredths of a
second. But they can also be used to describe a percentage of any duration. Consider these lines
of code.
6.5 Duration, attack, decay

var dur, att, dec;

dur = 10;
att = dur*0.1;
dec = dur*0.9;

As duration changes, attack and decay (att and dec) will change accordingly. They will always
be a percentage of dur. This ensures that the attack and decay do not exceed (and also use the full
length of) the duration. You don't necessarily have to use up the entire duration, and you can
certainly exceed the duration with the length of attack and decay, if that is the effect you want.
You may also want to have the same length for attack and decay regardless of the duration. But
for this assignment, they must match.
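As a quick check (a sketch of my own, reusing the perc envelope from example 6.1), you can plot an envelope built from these percentages and watch it stretch or shrink as you change dur:

(
var dur, att, dec;
dur = 2; //try 0.5 or 10 as well
att = dur*0.1;
dec = dur*0.9;
Env.perc(att, dec, 0.7).plot; //attack, release, level
)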

The second part of the assignment requires you to link the trigger to the duration, such that a trigger happens at the end of each duration; in other words, the trigger frequency should be the reciprocal of the duration. I have to confess this assignment is convoluted in order to force the issue of reciprocals, which are essential in dealing with both audio and low frequencies.

Why are reciprocals important in music? Music is made up of events in time. In SC we describe
time in seconds. But we can represent events either by their duration (how many seconds it lasts)
or the number of events in a second (frequency).

It gets confusing when we describe a duration (something we typically think of as longer than a second) as a frequency. For example, if a duration is 10 seconds long, what is its frequency? That is to say, how many times will that event happen in a second? (It's confusing because fewer than one will happen in one second, but it can still be expressed as a frequency.) It is also confusing if we describe frequency in terms of duration. If a frequency is 100 Hz, what is the duration of each wave?

Frequency and duration are reciprocals. That is, d = 1/f and f = 1/d. A frequency of 100 Hz has a duration of 1/100th of a second. A duration of 10 seconds has a frequency of 1/10th of an event per second, or 0.1 Hz.

I'll let you review how to calculate reciprocals on your own, but here are some typical examples
in SC. Most of SC deals with frequency, so it is more common to encounter the frequency to
duration conversion.
6.6 Frequency and duration (scope, Impulse, .kr)

Synth.scope({Impulse.kr(4)})//freq = 4, dur = 1/4 or 0.25


Synth.scope({Impulse.kr(3)})//freq = 3, dur = 1/3 or 0.33

Synth.scope({Impulse.kr(0.2)})//freq = 0.2, dur = 1/0.2 or 10/2 or 5 sec

Synth.scope({Impulse.kr(0.125)})//freq = 0.125, dur = 1/0.125 or 1000/125 or 8 sec

Since SC understands both decimal and fractional notation my students have found a cheat: use the notation that makes sense to you. If you use fractional notation to indicate frequency, the top number represents the number of events and the bottom number the number of seconds. Thus 10/1 is ten events in 1 second, and 1/5 is one event every 5 seconds (equivalent to 0.2). Using this method you can drop the decimal notation and calculation and just describe the events per duration: 3/5 is three events in 5 seconds, 35/41 is 35 events in 41 seconds, 68/13 is 68 events in 13 seconds, and 15/1 is fifteen events in one second (in this case you may just as well drop the denominator).
6.7 Frequency expressed as a ratio

Synth.scope({Impulse.kr(10/1)})//ten events in one second

Synth.scope({Impulse.kr(1/5)})//one event every 5 seconds, same as 0.2

Synth.scope({Impulse.kr(3/5)})//three events in 5 seconds

Synth.scope({Impulse.kr(15/1)})//fifteen events in one second

In short, the trigger frequency should be the reciprocal of the duration. Luckily SC will do the
math for us.
6.8 Duration, attack, decay

var dur, att, dec, trigFreq;

dur = 10;
att = dur*0.1;
dec = dur*0.9;
trigFreq = 1/dur;

In the actual assignment you will declare variables as in the example above, then replace the
arguments for Env and Impulse accordingly with those variables.

Just For Fun

This patch is even more complex, but certainly not beyond your capacity so far. It doesn't really
use anything we haven't already discussed. But it does illustrate how envelopes can be used in a
number of control situations. There are also several lines that are "commented" out. They are

intended as alternate suggestions. If you uncomment them then you have to comment the line
that they replace. In some spots the commented version is two lines, so be sure you uncomment
them both.

The patch chooses one of the envelopes and that single envelope is used to control the overall
volume and the modulation index (covered later). The most interesting thing you can change
(which will also illustrate envelopes controlling something other than amplitude) is the
ratioa*freq, ratiob*freq, etc. Try changing any combination of these to ratioa*freqCont and that
frequency input will be controlled by an envelope.
6.9 Just For Fun: Crotales (scope, Dust, Impulse, kr, TSpawn, rrand, Env, perc, choose, EnvGen, LFNoise1, LFNoise0, PMOsc, Mix, AllpassN)

(
Synth.scope({
var trigger, freq, indexDepth;
freq = rrand(20, 400);
indexDepth = 1;

//Use one of these triggers


trigger = Dust.kr(1); //Density of attacks (how many per second)
//trigger = Impulse.kr(8);
//trigger = Impulse.kr(
// LFNoise1.kr(1/10, 4, 6).round(1.0), LFNoise1.kr(1), add: -0.3);

TSpawn.ar({ //TSpawn is a timed spawn

var ratioa, ratiob, ratioc, ratiod, factor, dur, envs, env;


var panCont, freqCont, index, indexRange;

//Calculate ratios, carrier and modulator frequency multiples. I'm using
//the same calculations from one of McCartney's examples. Changes here
//won't lead to anything more interesting.

ratioa = rrand(1, 12);


ratiob = rrand(1, 12); factor = gcd(ratioa, ratiob);
ratioa = div(ratioa, factor); ratiob = div(ratiob, factor);

ratioc = rrand(1, 12);


ratiod = rrand(1, 12); factor = gcd(ratioc, ratiod);
ratioc = div(ratioc, factor); ratiod = div(ratiod, factor);

dur = rrand(0.1, 2.0); //total duration of each event

//Five possible envelopes. All calculated as a percentage of the duration.

envs = [
Env.perc(dur*0.9, 0.01), //long attack, sharp decay
Env.perc(0.0001, dur*0.9), //sharp attack, long decay
Env.perc(0.01, dur*0.8), //softer attack, long decay
Env.perc(0.1, dur*0.8), //soft attack, long decay
Env.perc(0.2, dur*0.8) //soft attack, long decay
];

//Each new frequency is a ratio of the previous freq


freq = (freq*([3/2, 2/1, 4/3, 3/4, 2/3, 1/2].choose)).wrap(20, 600);

env = envs.choose; //which envelope to use
//Index depth is how quickly the sound becomes richer
indexDepth = indexDepth + 0.3;
indexRange = (indexDepth).round(1.0).wrap(1, 24);

//use one of these freqCont lines


freqCont = EnvGen.kr(env, add: 1, levelScale: rrand(100, 400));
//freqCont = EnvGen.kr(env, add: 1, levelScale: rrand(60, 1200));
//freqCont = (EnvGen.kr(env, add: 1,
// levelScale: rrand(60, 1200)))*[1, 1.neg].choose;

//use one of these panCont lines


panCont = (EnvGen.kr(env, add: -1, levelScale: 2))*[1, -1].choose;
//panCont = LFNoise0.ar(20);
//panCont = LFNoise1.ar(2);
//panCont = 1.0.rand2;

o = PMOsc.ar(
ratioa*freq, //or try ratioa*freqCont,
ratiob*freq, //or try ratioa*freqCont,
pmindex: EnvGen.kr(env, add: 1, levelScale: indexRange.rand2),
mul: EnvGen.kr(env, levelScale: 0.3));

p = PMOsc.ar(
ratioc*freq, //or try ratioa*freqCont,
ratiod*freq, //or try ratioa*freqCont,
pmindex: EnvGen.kr(env, add: 1, levelScale: indexRange.rand2),
mul: EnvGen.kr(env, levelScale: 0.3));

o = Mix.ar([o, p]);
o = Pan2.ar(
o,
panCont
);

//try adding this reverb, but I like it without


//4.do({ o = AllpassN.ar(o, 0.05, [0.05.rand,0.05.rand], 1) });

o = o*EnvGen.kr(env, timeScale: 2, levelScale: rrand(0.1, 0.5));


o
},
2, //number of channels
inf, //number of repeats
trigger
)
})
)

7. Intervals

7 Assignment seven

TBD

[This section needs to be reworked. Add section on patterns and constructive, destructive
interference.]

The FSinOsc has an array for the frequency argument, but instead of static numbers I've used the variable f and the expression f*r. The variable f is equal to 400 and r is equal to 2/1, so [f, f*r] is the same as [400, 400*(2/1)].

The r = 2/1 represents the ratio of the second frequency to the first. If f = 400 and r = 2/1 ("two to one"), then f*r is 800, or 400 multiplied by 2 and divided by 1. Even though the division by 1 has no effect I express it this way so that you can try different ratios and see if they do indeed produce common intervals. Change the r value to 3/2 (a fifth), 4/3 (a fourth), 5/4 (a third), etc. Try 64/45 for a tritone.
7.1 intervals (Mix, FsinOsc)

(
Synth.scope(
{
f = 400; //fundamental
r = 2/1; //ratio for second note in interval
Mix.ar(FSinOsc.ar([f, f*r]))*0.4
}
)
)

What we hear as an interval is the combined result of constructive and destructive interference between the two frequencies. Because both frequencies are periodic, and are related by a simple mathematical ratio, the destructive and constructive points in the sound are recognized as an aggregate, and we "hear" the resulting pattern of peaks and valleys. Here is the same example written with three values: one for the first frequency, one for the second, and a third with both added together. Since there are three values in the array it would play on three channels, but we only have two, so we will hear only the first two; the third will only be shown in the scope. Even so, the two frequencies will mix in the air and we will essentially hear what the third channel plots.
7.2 intervals

(
Synth.scope(
{
f = 400;
r = 2/1;
a = FSinOsc.ar(f, 0.4);
b = FSinOsc.ar(f*r, 0.4);
[a, b, a+b]
}

)
)

Here is a plot of an interval of a fifth over one tenth of a second. I've included an image of the
resulting scope window, but feel free to run the code on your own.
7.3 chord plot (plot, Mix, FsinOsc, array)

Synth.plot({Mix.ar(FSinOsc.ar([200, 300], 0.3))}, 0.1)

There are lots of peaks and valleys in this example. The highest peaks are where there is the greatest amount of constructive interference. We hear the entire pattern as a fifth. Following are a fourth (4/3), a third (5/4), a minor sixth (8/5), and finally a tritone (64/45). The constructive peaks are fewer with each higher ratio, but we still hear the pattern. Dissonance and consonance are directly related to this, and can be defined as the number of peaks in a given amount of time, or the amount of time between the peaks that define the pattern.
7.4 interval plots (plot, Mix, FsinOsc, arrays)

Synth.plot({Mix.ar(FSinOsc.ar([200, 200*4/3], 0.3))}, 0.1)

Synth.plot({Mix.ar(FSinOsc.ar([200, 200*5/4], 0.3))}, 0.1)

Synth.plot({Mix.ar(FSinOsc.ar([200, 200*8/5], 0.3))}, 0.1)

Synth.plot({Mix.ar(FSinOsc.ar([200, 200*64/45], 0.3))}, 0.1)

Just as our eyes recognize the visual pattern in these plots, our ears pick up on the aural pattern that results from the constructive and destructive interference.

Here is one more interesting demonstration of intervals. This code allows you to move two
pitches that comprise an interval from low frequency to audio range. This way you can hear the
pattern as a set of clicks or pulses in low frequency. (As separate pulses we no longer hear
pitches and intervals, but rhythmic patterns of 3/2, 4/3 etc.) Then as you move the mouse to the
right, bringing the frequencies into audio range, you hear the resulting audio interval. In general
we use the term "low frequency" for frequencies below the point where we perceive pitch. In the case of sine waves the sound simply disappears, but with a wave such as a pulse or saw you hear a single click each period of the wave, when the speaker pops back to its original position. First
try running the line below to see what a single frequency sounds like when moving from low to
audio frequency. Move the mouse all the way to the left for low frequency, to the right for audio
frequency.
7.5 audio frequencies (Saw, MouseX, kr, scope)

Synth.scope({Saw.ar(MouseX.kr(1, 1200), 0.5)})

Now use the lines below to listen to two pitches with a ratio that results in a recognizable interval. I've set the ratio to 3:2 using ratioNum and ratioDenum. Change these to 2:1, 4:3, 5:4, etc. The term "superparticular" means the two numbers differ by only one integer. Try non-superparticular ratios such as 8:3 or 7:2. Remember that the higher these two numbers are, the more dissonant the interval will be.

In the LFSaw the first argument is an array. The first item in the array represents the left speaker
(frequency of lower pitch), the second item represents the right speaker. Run the code and move
the mouse all the way to the left. Listen to the pattern and look at the periods in the scope. If you
find the correct low frequency that matches the size of the window you can actually see the ratio
of the two waves; e.g. where there are three peaks on one channel you should be able to see two
peaks on the other. Next move the mouse slowly to the right until the frequency is high enough
that you perceive a pitch. Move all the way to the right to confirm that it is the interval you were
trying for.
7.6 ratios from LF to audio rate (scope, MouseX, LFSaw)

(
Synth.scope(

{
var freq, ratioNum, ratioDenum; //declare three variables
ratioNum = 3; //assign numerator
ratioDenum = 2; //assign denominator
freq = MouseX.kr(1, 110); //freq is mouse control
LFSaw.ar(
[freq, freq*(ratioNum/ratioDenum)],
0.3)
}, 1)
)

8. Additive or Fourier Synthesis, Random Numbers, Debugging and Postln,
CPU usage

8 Assignment:
a) Name four methods of selecting pseudo-random elements. That is, the elements selected
would seem random to us, but could be predicted given enough information about the
process and environment.
b) In the code below, insert a postln message to check one of the variables, and then
"Comment out" that line of code.

var numberAtTable, appetizer, meal, dessert, total, tip;


numberAtTable = [2, 4, 6, 10].choose; appetizer = 2.55*numberAtTable;
meal = 12.95*numberAtTable;
dessert = 4.35*(numberAtTable+1.rand + 1);
total = appetizer + meal + dessert;
tip = total*0.15;
total = total + tip;
(total/numberAtTable).postln;

c) In the "random bells" patch, what value would you change to get higher or lower
sounding bells? What variable would you change to make the bells ring for a longer time?
How would you make more bells sound per minute? What variable would you change for a
richer bell (more harmonics)?

d) Using the "random bells" patch, determine how many different random bells you can
keep track of.

e) Using the "random bells" patch determine the maximum number of unit generators that
the CPU on your computer can handle (i.e. overload the cpu usage).

Pellman points out that "given the present level of the technology of electroacoustic instruments (including sampling instruments), it is not yet possible to create an entirely credible emulation that incorporates all of the vital subtleties of an acoustic musical instrument." I think this is true of every synthesizer I've worked with (including samplers), except SC. What is lacking in most other systems is complexity, chaos, and random elements. All of these are possible in the SC environment. As an example, I've included a patch in the appendix that uses chaotic, complex, and random elements: the Chaotic Bell Patch. That patch shows that you can set separate envelopes for each of the partials in a sound (complexity), and that you can use random and chaotic algorithms to restructure the envelopes and upper partials each time the virtual bell is struck.

[Section explaining additive synthesis.]

The wave table example below contains a section of code that allows you to manipulate the upper partials of a given frequency. Most of the code is too involved for a detailed explanation now, but could you at least identify all the messages, variables, and arguments? This is also the first time we've seen a chain of messages, which is when you link several messages together. For example, try each of the first three lines below alone. Use command-p, which means run the code and print the results. Run the .choose and .rand lines several times to see that they choose different values each time.
8.1 message chains (rand, choose, arrays, midicps, postln)

[30, 23, 87].choose; //choose one member of the array

20.rand; //choose a number between 0 and 20

71.midicps; //convert the midi number to cycles per second (Hz).

//Now all of them combined in several lines of code.

a = [30, 56, 32]; //store array in variable a


b = a.choose; //choose a value from a and store it in b
c = b.rand; //choose a random number from 0 to b
d = c.midicps; //calculate the cps given c as a midi number
d.postln; //post the results

It is possible to combine all these lines into a single line or chain of messages. The result of one
message and receiver combination is passed on to the next message. Notice there is no longer
need for variables.
8.2 message chain (array, choose, rand, midicps, postln)

[30, 56, 32].choose.rand.midicps.postln;

You would read the code from left to right: using the array [30, 56, 32], choose one of those numbers; using the number you chose, pick a random number between 0 and that value; treat the result as a midi number and calculate its frequency in cycles per second; then post the result. The wave table example uses several chained messages.

Another interesting item in the example below is the GUI (graphic user interface). SC has some excellent (though cryptic) GUI tools, which I will probably never treat in depth, but be aware they exist. When you launch the code below, the GUI sections will bring two windows to the screen: a wavetable display and a harmonics editor.

The harmonics window shows blue bars that represent the presence and strength of each
harmonic. Each bar is a multiple of the fundamental: 1, 2, 3, 4, 5, etc. The fundamental in this
example is 200 (the example in the book uses 100), so the harmonics are 400, 600, 800, 1000,
1200, 1400, 1600, etc., or you could say 2*200, 3*200, etc. The window in the top shows the
shape of the resulting wave. (Remember this is a graph of the motion of the speaker.) We won't
do much with the phases window.

First try clicking the randomize button about four times. Notice that even though the sense of a
fundamental pitch stays the same there is a clear change in timbre and a change in the shape of
the wave.

Here is another interesting experiment. Even if you remove the fundamental, or even the first
few harmonics, you still have the same sense of a fundamental pitch. Click at the bottom of the
first and tallest blue bar (this is the fundamental). Notice you still hear the same pitch. Note also
that the wave shape has the same period. Remove the first four harmonics. The sound is very
thin, but you still hear it as the same pitch.

Not many natural sounds have such random harmonic patterns. There is usually a mathematical relationship in the amplitudes of the successive harmonics. Hit command-period to stop playback and launch the section of code again. This time try increasing one of the harmonics near the top by clicking above the existing line. Do you hear the sound change? Here is what is fascinating to me: when you first increase an upper harmonic you clearly hear the presence of that harmonic as a new pitch, separate from the fundamental. But the longer you listen, the more it blends with all the other harmonics into one timbre and one fundamental. Keep in mind these are all sine waves added together. This is called additive or Fourier synthesis.
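The GUI patch below handles all of the bookkeeping, but the underlying idea can be reduced to a few lines. Here is a hand-rolled sketch of my own (not one of the book's figures): six sine waves at multiples of 200 Hz, each softer than the last, mixed to a single channel.

Synth.scope({
    Mix.ar(
        SinOsc.ar(
            [1, 2, 3, 4, 5, 6]*200, //harmonics of 200 Hz
            0, //phase
            [1, 1/2, 1/3, 1/4, 1/5, 1/6].normalizeSum //decreasing amplitudes
        ))})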
8.3 Wavetable (normalize, asWavetable, WavetableView, GUIWindow,
HarmonicsDialog, close)

(
// Wavetable editor GUI
var table, win, har, harmArray;
harmArray = Array.series(24, 1, 1);
table = Signal.newClear(512).sineFill(1/harmArray,

[1pi]).normalize.asWavetable;

win = GUIWindow.new("wavetable", Rect.newBy(40,40,410,160));


WavetableView.new(win,
Rect.newBy(8,8,390,140),
table).hElastic.vElastic;

har = HarmonicsDialog.new(table);

Synth.play({ Osc.ar(table, 200, 0, 0.125) });


win.close; har.close;
)

As I said before, each of the sine waves in this example is a multiple of the fundamental. In this case the multiples are 1, 2, 3, etc. up to 24. This is a harmonic spectrum. The book mentions inharmonic spectra, or sets of frequencies that are not multiples of the fundamental but random choices. When we hit randomize earlier we weren't randomizing the frequencies, but rather the amplitude of each frequency; the frequencies were still harmonic, or mathematically related as multiples. To generate an inharmonic spectrum you could use a patch that lets you enter a set of unrelated frequencies and add them together with decreasing volumes. I got out my ruler and carefully measured the lines on figure 7.3 in the book to guesstimate those frequencies. I came up with 72, 135, 173, 239, 267, 306, 355, 473, 512, 572, and 626. These numbers are inharmonic because there is no common multiple or pattern. The amplitudes on figure 7.3 are approximately 1.0, 0.44, 0.49, 0.16, 0.38, 0.59, 0.20, 0.03, 0.11, 0.06, and 0.47. These values, if added together, would total more than 1.0 (about 4), so I need to do some math to lower them all.

Earlier I pointed out that you should not exceed 1.0 in the multiply argument of a sine wave,
otherwise you'll get a distorted sound. If we used the amplitude array the way it is in the chart it
would be too loud and we would lose harmonic information. All of the values need to be
reduced proportionally so that they total 1.0. This is called normalizing. To do this manually
you could add all the values, then divide each value by that total, but there is a message in the SC
language that will normalize an array of values (saving us the calculation). Here is the code. Be
sure to select both lines when you run it.

8.4 normalizeSum

[1.0, 0.44, 0.49, 0.16, 0.38, 0.59,
0.20, 0.03, 0.11, 0.06, 0.47].normalizeSum.postln;

I will round off the results to this: 0.25, 0.11, 0.12, 0.04, 0.1, 0.15, 0.05, 0.01, 0.03, 0.02, and
0.12.
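
If you want to check that calculation by hand, here is a minimal sketch of doing the same thing manually: divide the whole array by its own total. (The array message "sum" is my own addition here, not something from the book.)

a = [1.0, 0.44, 0.49, 0.16, 0.38, 0.59, 0.20, 0.03, 0.11, 0.06, 0.47];
(a / a.sum).postln; //each value divided by the total (about 3.93); the results now add up to 1.0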

In SC the code for generating the spectrum illustrated in the book would be:
8.5 random spectra (scope, Mix, SinOsc, arrays)

Synth.scope({Mix.ar(
SinOsc.ar(
[72, 135, 173, 239, 267, 306, 355, 473, 512, 572, 626],
0, //phase
[0.25, 0.11, 0.12, 0.04, 0.1, 0.15, 0.05, 0.01, 0.03, 0.02, 0.12]

))})

The Mix.ar unit generator is used to mix all the sine waves to a single sound. Without it, the code
would have generated an array of sine waves (11 to be exact) that would have been sent to 11
channels. The graph would have looked interesting, but we would only have heard two of the
waves in the right and left channel. Mix sends them all to one channel.

As mentioned in the book this sounds a little like a gong, or a bell. Any set of random
frequencies would result in a similar sound.

How would we come up with our own set of random frequencies? One method would be to use a
deck of cards. But to start out we should make a rule that we are picking frequencies between
111 and 999, ensuring choices with three digits that only use integers between 1 and 9. Then we
could use a deck of cards, drawing three at a time and use each set of 3 cards for a frequency. We
ignore face cards and 10s. For the amplitude array we can do the same, but one value per card.
This time I'll limit the number of values to 8.
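
(If you don't have a deck of cards handy, here is a rough sketch of the same drawing process in SC, using rrand, Array.fill, and normalizeSum, which all appear later in this chapter. The extra parentheses matter because SC evaluates binary operators strictly left to right.)

Array.fill(8, {(rrand(1, 9)*100) + (rrand(1, 9)*10) + rrand(1, 9)}).postln; //eight three-digit frequencies
Array.fill(8, {rrand(1, 9)/10}).normalizeSum.postln; //eight amplitudes, scaled so they total 1.0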

The effect will still be like a bell. This is what I got when I did it: 986, 329, 875, 498, 754, 476,
332, 354 for pitch and 0.6, 0.7, 0.4, 0.5, 0.7, 0.8, 0.9, 0.4 for amplitude. In the example below the
normalizeSum message is included in the code.
8.6 bell (Mix, SinOsc, arrays, normalizeSum)

Synth.scope({Mix.ar(
SinOsc.ar(
[986, 329, 875, 498, 754, 476, 332, 354],
0,
[0.6, 0.7, 0.4, 0.5, 0.7, 0.8, 0.9, 0.4].normalizeSum
))})

Sounds pretty much like the example from the book. Do you think you could distinguish between
the two examples if they were played back to back? I was very surprised at how easy it is to tell
them apart even though the harmonics were chosen at random. I'll demonstrate that below. But
first, we need to discuss random numbers. (This is a somewhat theoretical discussion that is not
essential to synthesis, so you can read it once for good measure even though it might not sink in.)

Random Numbers, Perception

There is no such thing as a random number. Randomness is a state of mind. "Random" is a


human concept based on our ability or inability to perceive a pattern. (Many composers use the
term "pseudo-random" when referring to random processes.) Take, for example, a deck of cards.
When we shuffle them they are "randomized" so that we can't predict what the sequence of cards
would be. But if we knew the order of the cards before the shuffle, and if we kept close track of
the order in which the cards fall during shuffling, we would then be able to predict the order. If
that were the case, the deck would not be randomized, but perfectly predictable. What is the
difference? Our level of perception. So what random really means is mixed up to the point where
a particular human can't predict the outcome. (I do card tricks as a hobby. Card tricks work on
the principle that the one being tricked perceives the deck as randomized. To the one doing the
trick, it is not.)

To generate random sequences composers have resorted to a sort of number shuffling using a
formula (see Moore page 409). The computer "knows" the sequence, because it is a formula, but
the formula is so complex that we humans don't recognize the pattern. Inside most programs like
SC there is a random number generator. It is a (very long) series of numbers that is created each
time the program is run. The odd thing is that the order never changes because the same formula
is used each time; the set of numbers rests in the computer's memory intact. So to the computer it
is not random at all. It's like a deck of cards that has been shuffled once, but is never shuffled
again. When you run a program that generates supposedly random events (such as a dice game or
card game), the computer looks for values from this sequence of numbers. The problem is it
always starts at the beginning, so you always get the same values. Much like using the same deck
of cards from the top of the deck without cutting or reshuffling. The order will seem randomized,
but will never change. Part of random, to us, is different numbers each time. So how do you get
it to do different numbers? The solution to this is to start at a point other than the beginning each
time. This is called a random seed. A random seed is analogous to cutting the deck of cards; you
give the computer a seed, or a number, which is used to count into the random number sequence.
The computer then starts its random sequence from that point.

Back in the days of programming on a mainframe you had to enter the number each time you
ran the program. In this case you know the number, so it's not really random. In addition,
entering a number each time was a hassle. So the next step in those days was to seed the random
sequence automatically with a number from a "random" event (or to be more precise, an event or
number that a human cannot predict). We achieved this by using the internal clock of a CPU,
which is just a series of numbers rapidly flying by. We couldn't predict what the number would
be since the seed happened when you ran the program, and the clock was at an unpredictable
spot. It seems a little convoluted, but that's how it was (and is) done. It looked something like
this:

srand(time);

The function "time" returned an unpredictable number from the internal clock which was used by
the function "srand" to count into the random number series. Bingo: random numbers (at least to
us humans). (Note: you may have or have had a computer game like solitaire that allowed you to
enter a random seed each time you started a game. Doing this would generate the same series of
cards each time. This way you could play the same game over and over to see if you could win
with a different strategy.)

SC does a random seed automatically behind the scenes. Each time you run a line like 10.rand
SC first reads the number on its internal clock to use as a seed. It then moves into the random
number generator sequence that far and starts its sequence of choices from that point. It is
random to us because we can't predict what number the clock gave or the order of the numbers.

Why is this information important? It is useful to know because often you don't want a random
seed or random events. There are cases where you may want a predictable seed and predictable
events. Each seed number represents that version of pseudo-random numbers. If you give the
same seed twice the code will generate precisely the same values on subsequent runs of a
program. The events will seem random in the sequence, but it will be the same pseudo-random
sequence. This is desirable when you are debugging and want to reproduce an error. Also you
will find a particular variation of random events that you like and will want to reproduce it
exactly. (We are now moving out of the theoretical discussion and you need to pay attention
again.)

First I'll demonstrate some random choices and then some random choices using a seed. To
generate a random number you use the message "rand." This message can be sent to any number
and it will return a random choice between 0 and that number. If you use the syntax "55.rand" it
will choose an integer (e.g. 1, 2, 3) between 0 and 55 (not including 55). If you use the syntax
55.0.rand it will choose a floating point number (e.g. 3.3452, 1.2354) between 0.0 and 55.0. Try
both lines several times each to see how random numbers are chosen. Note that I have strung
several messages in a row. The 10.rand is first executed. The result of that expression is then sent
to the postln message.
8.7 rand

10.rand.postln;

10.0.rand.postln;

Running the lines several times over and over is a bit cumbersome, so I'll show you a method of
picking a group of random numbers and storing them in an array. Remember an array is a
collection of items enclosed in brackets. The "Array" can receive messages that manipulate the
array. It understands the message ".fill" with two arguments: the number of items in the array,
and the function used to generate the items in the array. Remember a function is a line or lines of
code enclosed in braces. So example 8.8 below will fill the array with random values between 0
and 100. Run it several times to see that it is picking random values each time. (Random to us,
that is.)
8.8 test array (Array, fill)

var testArray;
testArray = Array.fill(12, {100.rand});
testArray.postln;

I would like to point out a mistake that I made often when learning SC. Compare example 8.8
with 8.9. How do they differ? Try running 8.9 and see how different the results are.
8.9 function error

var testArray;
testArray = Array.fill(12, 100.rand);
testArray.postln;

It picks a random number, but it uses that single random number each time it puts a value in the
array. In other words, it uses the same number over and over. The difference is that the first
example encloses the random number choice in a function. A function means "run this line of
code each time," while 100.rand on its own means pick a random number once. In short,
{100.rand} means pick a new random number each time; 100.rand means pick one random
number and use it every time.

Now try using a random seed. Run this set of lines four or five times and notice that we do get a
random array, but it's the same array each time. Try changing the seed to something other than 5.
You'll get a new series, but the same series each time.
8.10 random seed (thisThread, randSeed, postln)

var testArray;
thisThread.randSeed = 5;
testArray = Array.fill(12, {100.rand});
testArray.postln;

So now rather than thinking of your aleatoric work as random or pseudo-random series of events,
you can consider the code you write for a piece the DNA for billions of variations, of which one
is chosen each time.

Bell Array

Earlier we discussed the tonal quality of a random set of upper partials. That is, an inharmonic
frequency spectrum. I questioned our ability to recognize these supposedly random collections.
The truth is we are quite good at distinguishing collections of complex sounds. This is why we
can recognize a person's voice over the phone after knowing them only a short time.

The code below sets up a collection of bell-like sounds, each with its own pseudo-random
harmonic spectrum. Array.fill loads the array harmArray with 12 values using the function
{1 + 4.0.rand}. 4.0.rand returns a random value between 0 and 4.0, and the "1 + " shifts the final
results to between 1.0 and 5.0. The second array is filled with values between 0 and 1.0, and the
normalizeSum message ensures that the total of all its values does not exceed 1.0. These two
variables are used where we previously used literal arrays: in the first and third arguments of the
SinOsc.ar. I use harmArray * fund for the first argument; since harmArray contains values
between 1.0 and 5.0 and fund is 200, the frequencies will be between 200 (1*200) and 1000
(5*200). Run it several times to see that each successive run produces a different complex
inharmonic series. Could you enter a line that would allow you to use a random seed and produce
the same bell quality twice in a row?
8.11 random frequencies (Array, fill, scope, Mix, SinOsc)

(
var harmArray, ampArray, fund; //declare two variables
//in the first array store 12 values between 1 and 5.0
//in the second store 12 values between 0 and 1.0,
//normalize the second array (ampArray) so as not
//to exceed 1.0
fund = 200;
harmArray = Array.fill(12, {1 + 4.0.rand});
ampArray = Array.fill(12, {1.0.rand}).normalizeSum;

Synth.scope({
Mix.ar(SinOsc.ar(harmArray * fund, 0, ampArray));

}, 0.1)
)
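
(And here is one possible answer to that last question, a sketch rather than anything from the book: set the thread's random seed before the arrays are filled, exactly as in the earlier seed example. The same seed produces the same bell every time.)

(
var harmArray, ampArray, fund;
thisThread.randSeed = 11; //any fixed seed; change it for a different (but repeatable) bell
fund = 200;
harmArray = Array.fill(12, {1 + 4.0.rand});
ampArray = Array.fill(12, {1.0.rand}).normalizeSum;
Synth.scope({Mix.ar(SinOsc.ar(harmArray * fund, 0, ampArray))}, 0.1)
)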

Debugging, Postln, Comments

This would be an appropriate time to illustrate some debugging techniques. I've only worked
with a few compilers, but the ones I have used have built-in utilities for checking what the values
of variables are as you run the code. SC does not have a lot of these features built in, but you can
include lines of code that will show you what is happening while the program runs. The most
commonly used messages are "post" and "postln." These two messages print the object to
whichever window is open at the time. I've inserted two postln messages in the code below,
which document what is actually contained in the two arrays.
8.12 postln, post (Array, fill, rand, normalizeSum, postln, Mix, SinOsc)

(
var harmArray, ampArray, fund; //declare two variables
//in the first array store 12 values between 1 and 5.0
//in the second store 12 values between 0 and 1.0,
//normalize the second array (ampArray) so as not
//to exceed 1.0
fund = 200;
harmArray = Array.fill(12, {1 + 4.0.rand});
ampArray = Array.fill(12, {1.0.rand}).normalizeSum;

harmArray.postln;
ampArray.postln;

Synth.scope({
Mix.ar(SinOsc.ar(harmArray * fund, 0, ampArray));
}, 0.1)
)

Using comments and "Post Here Always" are two useful techniques when printing messages in
code. For example, I may run the code above watching the values in harmArray and ampArray
four or five times. When I arrive at a point where I trust that section of code I may no longer
want to print that information to the screen. Rather than remove the print messages, I can disable
them by "commenting out" those lines. This is done by placing a "//" at the beginning of the line
("//harmArray.postln;"). That section of code will then not be run and the values won't be printed
to the window. If you want to check the values again you can remove the comments. Remember
that you can select a line and use command-/ and shift-command-/ to add or remove comments
automatically. As a matter of fact you can "comment out" other sections of code as a debugging
tool. If something is going wrong, removing sections of code using comments is a fast way to
isolate the origin of the error.

"Post Here Always" (under the Lang menu) designates a window to which all messages will be
posted. When I'm debugging I usually open a blank window, select "Post Here Always," then
resize the window so that it is to the side of the computer screen. That way all my error messages
from the SC compiler, as well as my "postln" messages from my code, will be printed to that
window.

The next example is expanded to include a Spawn. A Spawn generates a series of events and will
be used in most of the patches we put together. This is probably the most complicated patch
we've done so far. But remember that if you can keep objects, messages, and more importantly
arguments straight in your mind, you should be able to decipher what sections of code do even
though the entire patch may escape you. Take a second to identify each argument for each
message.
8.13 3 bells (Array, fill, scope, Env, perc, Mix, SinOsc, array, choose,
EnvGen, Spawn)

(
var fArray1, fArray2, fArray3, aArray;


var env, bell;

//declare variables and fill the arrays with random values

fArray1 = Array.fill(12, {1 + 4.0.rand});


fArray2 = Array.fill(12, {1 + 4.0.rand});
fArray3 = Array.fill(12, {1 + 4.0.rand});
aArray = Array.fill(12, {1.0.rand}).normalizeSum;

Synth.scope({
//the perc envelope uses only an attack and a decay
env = Env.perc(0.0001, 2);
//the variable "bell" is used to store the a function
//containing the entire sound we've been using up until now
bell = {
Mix.ar(
SinOsc.ar(
[fArray1, fArray2, fArray3].choose * 200,
0,
aArray
)
)*EnvGen.kr(env)
};
//The first argument in a spawn is the uGenFunc. "bell" has
//been set to the uGenFunc above, so it is used as the first
//argument. The second argument is the number of channels, the
//third argument is the time when the next event will happen
Spawn.ar(bell, 1, 3);
}, 0.1)
)

In this example three arrays are filled with pitch information and a single array is used for
amplitude of each partial. (You could probably do arrays for amp too, but I've left it off for
simplicity.) The variable "bell" is assigned the entire function that includes the Mix.ar, and the
SinOsc.ar, which generates the 12 sine waves and adds them together. For the frequency
argument in the SinOsc I use the [array].choose syntax to pick one of the arrays, then
multiply that array by 200. (I guess we haven't done any math operations on arrays yet. Here is
how they work. If you have an array [1, 3, 7, 4] and add 10 to it, then 10 is added to each item in
the array and an array filled with those results is returned. Try the lines of code below to confirm
this.)
8.14 arrays and math

a = [1, 5, 7, 3, 8];
(a + 34).postln;
(a*1.23).postln;

So the frequency argument for SinOsc is 200 multiplied by one of the fArrays above. The
fArrays are filled with values between 1.0 and 5.0, so the results are going to be frequencies
between 200 and 1000. (Could you insert some postln messages to check these array values?)
The difference between this patch and the previous patch is the Spawn. The Spawn uses the
entire bell function as the first argument and generates an event every 3 seconds. Since "bell" is a
function, Spawn runs the entire function each time and comes up with the set of frequencies
associated with fArray1, fArray2, or fArray3. In other words, three bells.

Random Bell Patch

What I wanted to illustrate is that even though these are random sets of frequencies, we still can
recognize the pattern of constructive and destructive interference, just like the harmonic series
examples. In this case we recognize them as bells. When you run this code close your eyes and
imagine three bells in front of you. Can you see which one is being struck each time?

How many different bells can we distinguish? The code below does two things. It is a more
efficient example of the patch above (for those of you who are catching on to the code and want
more of a challenge). It also has a variable allowing you to increase the total number of different
bells to see how many you can recognize and keep track of. This patch is more condensed, which
is much more characteristic of code written by SC authors. It assumes the ability to recognize the
nested statements and arguments. You may still get a little lost in the arguments, but you should
at least be able to identify which elements change which aspects of the sound. For example, what
value would you change to get higher or lower sounding bells? How would you make the bells
ring for a longer time? How would you make more bells sound per minute? What variable would
you change for a richer bell (more harmonics)?
8.15 random bells (Array, fill, rand, normalizeSum, Env, perc, Mix, Spawn,
choose, kr, ar)

// Random bells
(
var fArray, aArray, totalBells = 6, baseFreq = 400, totalHarm = 12;
var env, bell, nextEvent = 1.5, envAttack = 0.0001, envDecay = 1;

//create an array of arrays and store it in fArray.


//totalBells will determine the number of possible freq arrays.
//fill each array with totalHarm (12) random ratios between 1.0 and 5.0

fArray = Array.fill(totalBells,
{Array.fill(totalHarm, {1 + 4.0.rand})})*baseFreq;
aArray = Array.fill(12, {1.0.rand}).normalizeSum;

Synth.scope({
env = Env.perc(envAttack, envDecay);
Spawn.ar({Mix.ar(SinOsc.ar(fArray.choose, 0, aArray))*EnvGen.kr(env)},
1, nextEvent)}, 0.1)
)

CPU Usage

Set the number of bells to 3 (totalBells) then raise the totalHarm to 150 (see note on processor
capacity below). Lower the baseFreq to 200. Can you distinguish between the 3 sounds? (With
150 harmonics it is more like a piece of metal than a bell.) Even with 150 random harmonics and
amplitudes the brain is still able to distinguish between the 3 sets. Over and over I'm fascinated
by our ability to distinguish timbre. I'm sure you've had the experience of a friend you haven't
heard from for years call you on the phone, and you immediately recognize their voice. What we
recognize are the upper partials that make up the character of their speech.

You probably have noticed the information flashing by at the top of the screen when SC is
playing a sound. Now that information is important.

I'll start on the right. The last one indicates how many unit generators are in use while playing a
sound. (Examples of unit generators are EnvGen.kr, Spawn.ar, Mix.ar, SinOsc.ar, etc. They are
analogous to modules on a patchable synthesizer.) You'll notice that with the patch we just
created that number gets pretty high. That's because in a sense we are creating 150 Oscillators
and mixing them together. (By comparison, the first generation synthesizers had about 12, at
best, oscillators. That number hasn't increased a whole lot over the years.) So SC can do a lot,
but on the other hand we are limited by processor power. The next two values in the bar at the
top of the screen (from the right) are volume and time. They are self-explanatory. The first two
from the left, peak and avg, are important to us here. They tell us how much of the computer's
processing power is being used to generate the sound. This patch, if you set the totalHarm to 150
or so, should get you pretty close to 100%. (I'm working on a G3 333 MHz, so some slower
machines may show values well above 100.) If you exceed the processing capacity the sound
will drop out. I've been lathering on praise about how powerful SC is and here we are bumping
our heads against a ceiling. But I think it would be more accurate to say the limitation is in my
programming ability. There are several more efficient ways of achieving the effect of this bell
patch. Klank.ar is one.

For example, we could use the additive synthesis model to add together 100 upper partials in the
harmonic spectrum, and use Mix and SinOsc to generate a bright sound, as in the first example
below. The number of Ugens in that example is 100 and the processor (on my G3) is at 24%. But
the same timbre can be generated using a single Saw.ar, as in the single line that follows it. In
that case, I am only using 2
Ugens and 1.8% of the CPU.
8.16 CPU usage (Array, series, Mix, scope, SinOsc, Saw)

(
var fArray, aArray;

fArray = Array.series(100, 1, 1);
aArray = (1/fArray).normalizeSum;
Synth.scope({Mix.ar(SinOsc.ar(fArray*200, 0, aArray))})
)

Synth.scope({Saw.ar(200, 0.5)})

So one might argue that SC is limited only by my programming creativity. The processor
percentage at the top of the window is a way to gauge how efficient your program is.

What would happen if we continued to add random frequencies to the patch using as many as
1,000 frequencies? The bell sound would become more and more unfocused and we would
eventually hear noise. That is covered in the next chapter.

Just For Fun

At the end of each section I would like to add a more complex expanded example of the patches
discussed. This is for those who want to do extra work, and is not a course requirement.

Below are two patches, one demonstrating harmonic spectra, the other inharmonic spectra. The
Ugen Blip takes three arguments: frequency, upper partials, and amplitude. I've inserted a noise
Ugen for the upper partials argument. It has a frequency of 20 in one channel and 21 in the other.
The mul is 10 (hence -10 to 10, or a range of 20) and the add is 11 (adding 11 to -10 and 10 gives
1 and 21). So there will be between 1 and 21 upper partials. The
second uses a Spawn to generate SinOsc events with random (inharmonic) frequencies between
600 and 1600.
8.17 harmonic spectra (LFNoise0, Blip, scope)

Synth.scope({Blip.ar(100, LFNoise0.ar([20, 21], 10, 11), 0.3)})

8.18 inharmonic spectra (Env, perc, Spawn, Pan2, SinOsc, EnvGen, kr, choose,
rand2)

(
var freqArray, env;
env = Env.perc(6, 0.1, 0.3); //envelope: (attack, decay, level)
//level should not exceed 1.0
Synth.scope(
{
Spawn.ar(
{
Pan2.ar(
SinOsc.ar(
600 + 1000.rand, //random freq
//between 600 and 1600
mul: EnvGen.kr(env)
//mul SinOsc by these values
//there is a 75 percent chance that
//the value is 0, therefore no sound
)*[0, 0, 0, 0.1].choose,
1.0.rand2)
}, 2, 0.3) //next event (which could be 0) every 0.3 seconds

}
)
)

9. Subtractive Synthesis, Noise, Synth.write, Synth.record

9 Assignment:

a) Modify any existing patch replacing Synth.play or scope with Synth.write. Write several
sound files with different names. Import them into ProTools and use them for a short
composition study.

In subtractive synthesis you begin with a sound or wave that has a rich spectrum and modify it
with a filter. The results are often more natural because most natural sounds come about in the
same way. The tone of a piano, violin, or guitar begins with a source of random excitation: the
bowed string, the plucked or hammered string. It is the periodic repetition of that chaotic event and
the body of the instrument that shapes the character of the sound. The same is true of the human
voice. The vocal cords alone represent an unintelligible but rich fundamental sound source. The
tongue, sinus cavities and shape of the mouth determine the rich tone qualities that make up
speech.

There are many rich sound sources available in SC and other synthesizers. If richness is defined
by the number of sine waves that are represented in the sound, then arguably the richest is pure
noise. In the previous chapter you may have noticed that the more inharmonic spectral elements
there were, the more the sound approached noise. Noise is often defined as all
possible frequencies having equal representation. Though we typically think of noise as a bad
thing, it is an essential part of the variety and character we recognize in natural instruments.

There are 15 or so different types of noise available in SC. Some of them are used primarily for
audio signals, others for control, which we will cover later. The common denominator among all
the types is that they appear to be random, that is, we don't recognize any pattern in the wave.
When building patches the control rate noise is not often heard as a signal, but is used within the
structure of the patch to generate random elements. Listen to each of these examples of noise
below (change the variable "choice" to values between 0 and 4). Notice the differences in how
they look on the scope.
9.1 noise (scope, WhiteNoise, PinkNoise, BrownNoise, GrayNoise, Dust)

(
var signalName, choice;

signalName = ["White", "Pink", "Brown", "Gray", "Dust"];


choice = 0;

Synth.scope({
var signal;

signal = [WhiteNoise.ar(mul: 0.4), PinkNoise.ar(mul: 0.4),


BrownNoise.ar(mul: 0.4), GrayNoise.ar(mul: 0.4),
Dust.ar(3, mul: 0.4)];

signal.at(choice)
}, name: signalName.at(choice))

)

To use a noise generator for subtractive synthesis, the ugen should be placed in a filter. Three
common filters are low pass, high pass, and band pass. The objects in SC that represent these
devices are RLPF, RHPF, and BPF. The arguments illustrated below for the .ar message for each
filter are input, frequency cutoff, and the reciprocal of q (bandwidth/cutoff frequency). The input
is the signal being filtered. The cutoff frequency is the point where frequencies begin to be
filtered. Resonance affects how narrow the focus of the filter is. The practical result of a narrow
focus (e.g. 0.05) is the presence of a recognizable frequency at the filter cutoff.

In the examples below the X axis controls the frequency cutoff, and the Y axis controls the
resonance. The bottom of the screen is a narrow resonance (0.01) and the top is a wide resonance
(2.0).
9.2 Filtered Noise (scope, PinkNoise, MouseX and Y, RLPF, RHPF, BPF)

(
Synth.scope({
var signal, filter, cutoff, resonance;

signal = PinkNoise.ar(mul: [0.1, 0.1]);


cutoff = MouseX.kr(40, 10000, 'exponential');
resonance = MouseY.kr(2.0, 0.01);

RLPF.ar(signal, cutoff, resonance)})


)

(
Synth.scope({
var signal, filter, cutoff, resonance;

signal = PinkNoise.ar(mul: 0.1);


cutoff = MouseX.kr(40, 10000, 'exponential');
resonance = MouseY.kr(2.0, 0.01);

RHPF.ar(signal, cutoff, resonance)})


)

(
Synth.scope({
var signal, filter, cutoff, resonance;

signal = PinkNoise.ar(mul: 0.7);


cutoff = MouseX.kr(40, 10000, 'exponential');
resonance = MouseY.kr(2.0, 0.01);

BPF.ar(signal, cutoff, resonance)})


)

Here are similar examples using a saw wave (which is a rich sound by virtue of its upper
harmonic content) with filters. When a low resonance number is used on noise, a single
continuous frequency corresponding to the filter cutoff can be heard. But with a saw wave the
upper harmonics will resonate as the cutoff frequency passes near and over them.
9.3 Saw with Filter (scope, LFSaw, MouseX and Y, RHPF)

(
Synth.scope({
var signal, filter, cutoff, resonance;

signal = LFSaw.kr(100, mul: 0.1);


cutoff = MouseX.kr(40, 10000, 'exponential');
resonance = MouseY.kr(2.0, 0.01);

RHPF.ar(signal, cutoff, resonance)})


)

These filters are rather simple. A more complex type of filter called Klank allows you to specify
arrays of resonant frequencies, amplitudes, and decay rates. In the real world it is more common
to encounter a space or a body that resonates on a collection of frequencies rather than a single
low or high cutoff. Klank reproduces this phenomenon more accurately than a single filter might.
The first argument for Klank is an array of arrays (two dimensional array).

A two-dimensional array is very intuitive. Each item in the outer array is also an array. The outer array
begins with a "`" to protect against multi-channel expansion. (If this were left off SC would
interpret the array as different channels. Go ahead and try it.) The first inner array is the set of
frequencies that resonate, the second is the amplitude of each frequency (default is 1), and the
last is the decay rates of each frequency. The first example below is a steady state sound so the
amplitude and decay arrays are not needed and have been left off. The default is 1, and an
amplitude of 1 for each of the 10 resonant frequencies would overload the output, so the mul is
set to 0.1. The second example loads an array with random frequencies and has a high pass filter.
Run it several times for the full effect.
9.4 Resonant array (scope, Klank, BrownNoise, array, Array, fill)

(
Synth.scope({

Klank.ar(
`[
[100, 200, 300, 400, 500, 600, 700, 800, 900, 1000] //freq array
],
BrownNoise.ar(0.03), mul: 0.1)
})
)

(
Synth.scope({

var out;
out = Klank.ar(
`[
Array.fill(15, {exprand(60, 10000)}) //freq array
],

BrownNoise.ar(0.03), mul: 0.05)
})
)

Chimes

Noise sources can also be used to excite a resonant body. Earlier we created a bell using additive
synthesis. In this patch we approach a bell sound from the opposite direction; subtractive
synthesis. The clapper striking the side of a bell is a burst of noise. The physical construction of
the bell or chime resonates on certain frequencies. Those resonant frequencies are sustained and
all other frequencies die away quickly. The remaining frequencies are what make up the
character of the bell or chime.

For this patch we begin with an extremely short (1/100th of a second) burst of noise. The chime
instrument, or function, is placed in the Spawn, one channel, next event in 0.3 seconds, which is
placed in a Synth. The second argument in the scope message is the size of the scope window.
I've set it to 0.5 so you can see the noise bursts.
9.5 chime burst (Env, perc, PinkNoise, EnvGen, Spawn, scope)

(
var chime;

chime = { //Beginning of Ugen function


var burstEnv, att = 0, burstLength = 0.001, signal; //Variables
burstEnv = Env.perc(0, burstLength); //envelope times
signal = PinkNoise.ar(EnvGen.kr(burstEnv)); //Noise burst
}; //End Ugen function

Synth.scope({Spawn.ar(chime, 1, 0.3)}, 0.5)


)

For a resonator we use Klank.ar. It uses Array.fill to generate the arrays, as we did before. Also,
using Array.fill allows us to fill the array using a function. That means that each time we run the
patch the array of frequencies (and therefore the character of the instrument) will be different.
The input argument for Klank is the noise burst.
9.6 chimes (Array, fill, rrand, normalizeSum, round, Env, perc, Klank,
EnvGen, MouseY, Spawn)

(
var chime, freqSpecs, totalHarm = 10;

freqSpecs = `[
Array.fill(totalHarm, {rrand(100, 1200)}), //freq array
Array.fill(totalHarm,
{rrand(0.3, 1.0)}).normalizeSum.round(0.1), //amp array
Array.fill(totalHarm, {rrand(2, 4)})]; //decay rate array

chime = { //Beginning of Ugen function


var signal, burstEnv, att = 0, burstLength = 0.001, masterVol = 0.8;
burstEnv = Env.perc(0, burstLength); //Define envelope
signal = PinkNoise.ar(EnvGen.kr(burstEnv)); //Noise burst

signal = Klank.ar(freqSpecs, signal);
signal*EnvGen.kr(Env.perc(0, 4), MouseY.kr(masterVol, 0));
}; //End Ugen function

Synth.scope({Spawn.ar(chime, 2, 1)})
)

The number of harmonics, amplitudes, and decay rates is set with totalHarm. The freqSpecs
array is preceded by the "`" character. Each array in the freqSpecs array is filled automatically
with values generated by the functions in the second argument of the fill message. The
frequencies are a random choice between 100 and 1200, the amp array is a random choice
between 0.3 and 1.0 (normalized and rounded to 0.1), and the decay rate is a choice between 2
and 4.

The variable "signal" is inserted as the second argument to Klank.ar, along with the freqSpecs
array, and that result is again stored in "signal."

The final line of code in the chime function is the final signal multiplied by an EnvGen. You
may have noticed there are several places where you can adjust the amplitude of the signal in a
patch. The same is true with most electro-acoustic music studios. In this patch you can adjust the
volume in the EnvGen, or at the PinkNoise, at the Klank or at the final EnvGen. The last
envelope, as mentioned earlier, is necessary to allow the Synth to release those Ugens used in
each Spawn.

Try adjusting the total number of harmonics. Try adjusting the decay (burstLength), i.e. the length of the
noise burst. We haven't adjusted the attack in any of these examples. A softer attack may
represent a softer strike. Change the range of frequencies in the freq array. Change the decay
rates. Finally, try replacing the Array.fill with your own array of frequencies. (You just have to
make sure you enter the same number of frequencies as stipulated with totalHarm, since those
arrays will be of that size.) Try even harmonics, odd harmonics, slightly detuned harmonics.
Could you write a function that would automatically fill the array with even or odd harmonics?
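
(One possible answer, sketched with Array.series rather than a fill function, since even and odd harmonics follow a simple arithmetic pattern; the fundamental of 100 is my own arbitrary choice.)

Array.series(10, 1, 2).postln; //odd harmonic ratios: 1, 3, 5, 7, etc.
(Array.series(10, 2, 2)*100).postln; //even harmonics of 100 Hz: 200, 400, 600, etc.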

Synth.write, Synth.record

At this point you may be wondering how you can use these sounds compositionally. One method
is generative composition, described below. To me, this is SuperCollider's strength. But it can
also be used in "classic" electronic composition. In this style the composer uses a multi-track
recorder to combine electronic sounds generated on a synthesizer with concrète sounds recorded
live in a collage to be mixed down for stereo or quad performance. You could very well connect
the computer's audio out to another digital or analog tape recorder and collect the sounds that
way. But, as usual, there is an easier method (assuming you combine the sounds with a digital
editor such as ProTools or Peak). The method is Synth.write or Synth.record. Synth.write writes
the audio to a file on disk, while record plays the sound and writes it to disk at the same time.

The arguments for write and record are (ugenGraphFunc, duration, pathName). The second and
third arguments for Synth.write and Synth.record apply to recording length and file name, but
otherwise write and record work the same way as play. So any of the patches we have done so far or will
do in future chapters can be "captured" to disk by substituting play with write or record.

A word about file path names. (And I'm speaking from empirical experience, not formal
training.) When SC (like most programs) accesses a file it first looks for the file in the home
directory. That is the same folder where the application is located. If you use a simple file name
for pathName the file will be written and saved in the SC folder (or the folder where SC is
located). A pathname hierarchy is indicated with folder names separated by a colon. It is possible
to save files in folders inside the SC folder (the folders must already exist) using colons to indicate the pathname, or a folder
above the SC folder using two colons for the directory above, three for two directories above. So
a file in the same folder as SC is simply "MyFile", if it is in a subfolder it might be ":Data
Files:MyFile", if in another area then perhaps "::Audio:MyFile". For all the examples in this text
I will use a simple file name, so the resulting file will appear in the same folder where SC
resides. File names outside the SC folder will depend on the name of your hard drive and your
folder hierarchy, possibly "MacintoshHD:Documents:CM:Data:MyFile". To write a file to the
desktop the file name might be "MacintoshHD:Desktop Folder:MyFile".
9.7 Synth.record (PinkNoise)

Synth.record({PinkNoise.ar}, 10, "testpinknoise")
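
Synth.write takes the same arguments but writes the file to disk without playing it through the speakers. A minimal sketch (the file name here is my own invention):

Synth.write({PinkNoise.ar}, 10, "testpinknoisewrite")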

The default format of the file is 44.1k, 16 bit, aiff. It can be opened by any application that
recognizes this format. However, the Mac OS will assume the file belongs to SC. If you try to
open it by just double clicking, it will open in SC as text and you will just see gibberish on the
screen. There are a number of strategies for correcting this. You can use ResEdit, choose Get
Info, and change the creator (complicated geeky method), or use a third party program such as
FileTyper or More File Info contextual menu to change them automagically, but this is
unnecessary. The files can be opened, read and edited by Peak, ProTools, SoundSculptor, etc. To
compose in the classic "tape collage" method simply use the Audio menu in ProTools to import
the files created by SC (it even recognizes stereo or quad files), then copy and paste to your
heart's content.

Just for Fun

The patch below uses both a steady state Klank and a burst of noise within a Klank; each
sound begins as noise and is then filtered. The chimes sound and the cavern sound are mixed at the
bottom of the patch by simply adding them together.
9.8 Subtractive Synthesis Fun (Mix, Array, fill, Pan2, Klank, Decay, Dust,
PinkNoise, rand2, RLPF, normalizeSum, GrayNoise, LFSaw)

Synth.play({
var totalInst, totalPartials, baseFreq, ampControl, chimes, cavern;
totalInst = 12; //Total number of chimes
totalPartials = 15; //Number of partials
baseFreq = rrand(200, 1000); //Base frequency for chimes
chimes =
Mix.ar(

Array.fill(totalInst,
{
Pan2.ar(
Klank.ar(`[
Array.fill(totalPartials, {baseFreq*rrand(1.0, 12.0)}),
Array.fill(totalPartials, {rrand(0.3, 0.9)}),
Array.fill(totalPartials, {rrand(0.5, 6.0)})],
Decay.ar(
Dust.ar(0.2, 0.02), //Times per second, amp
0.001, //decay rate
PinkNoise.ar //Noise
)), 1.0.rand2) //Pan position
})
);

cavern = OverlapTexture.ar({
RLPF.ar(Klank.ar(
`[ //frequency, amplitudes, and decays
Array.fill(30, {100 * rrand(1, 12) * rrand(1.0, 1.1)}),
Array.fill(10, {rrand(1.0, 5.0)}).normalizeSum
],
GrayNoise.ar([rrand(0.03, 0.1), rrand(0.03, 0.1)]), mul: 0.09
), 1000 //overall freq cutoff
)
},
7, 4, 3, 2, //sustain, transition, overlap, channels
mul: LFSaw.kr(1/60, mul: 0.4, add: 0.5); //Amplitude control
);

chimes + cavern
})
//End Patch

10. Karplus/Strong

10 Assignment:
None at this writing.

Another classic example of a noise-based instrument is the Karplus-Strong pluck instrument used
in early computer music. It is not exactly filtered; rather, a short burst of noise is turned into
a periodic wave using an echo chamber. The theory is that any wave, no matter how complex or
pseudo-random, will be heard as a pitched sound if it is periodic.

A simple illustration of this principle can be done using a digital editor (such as Peak,
SoundSculptor, or ProTools). Here are the steps: Record some noise. Select a small section of the
noise, something smaller than 1/100th of a second. Make sure you copy a section at a zero crossing.
(A selection of 1/100th of a second will produce a pitch at 100 Hz; 1/200th of a second will result in
200 Hz.) Copy that 1/100th of a second clip and paste it into a new document. Paste it again about
ten times. Then copy those ten segments, i.e. the entire file, and paste that in ten times, then
select the entire file again and paste that ten times. The final result is a wave that began as noise
but that we hear as a pitch because it is repeated over and over (we have made it periodic).

Karplus-Strong Pluck Instrument

We begin with a very short burst of noise, so set the attack to 0 and the decay to 0.05. (This is
1/20th of a second. You might think that is small, but it's pretty easy to hear. How short of a
sound do you think you can hear? 1/100th? 1/200th?)
10.1 noise burst (scope, EnvGen, PinkNoise)

(
Synth.scope(
{ //Beginning of Ugen function
var burstEnv, att = 0, dec = 0.05; //Variable declarations
burstEnv = EnvGen.kr(Env.perc(att, dec)); //Define envelope
PinkNoise.ar(burstEnv); //Noise, amp controlled by burstEnv
} //End Ugen function
)
)

The next step is to send the burst of noise through an echo chamber. We will use CombL.
CombL understands the .ar and has these arguments: (in, maxdelaytime, delayTime, decayTime,
mul, add). The input is going to be the burst of noise we just created. The delaytime and
maxdelaytime are the same for this example. They represent the amount of time, in seconds, the
signal is delayed (the echo). The decaytime is how long it takes for the echo to die away. In this
case, the add is used to mix in the original dry signal. Without this you would only hear the
delayed signal. It may seem like I've complicated the patch by adding the variable out. But you
may see that it actually clarifies the code. "Out" is used to store the original burst of noise and
then is inserted into the input of the CombL. We could just as easily have used this syntax:
CombL.ar(PinkNoise.ar(burstEnv), etc. I find it clearer to split sections up with variables. To me
it feels a little more like a patch. In this version of the patch, try changing the delayTime and
decayTime until you are comfortable with how they affect the patch.
10.2 burst and delay (PinkNoise, EnvGen, Env, perc, CombL)

(
Synth.scope(
{ //Beginning of Ugen function
var burstEnv, att = 0, dec = 0.05; //Variable declarations
var out, delayTime = 0.5, delayDecay = 10;
burstEnv = EnvGen.kr(Env.perc(att, dec)); //Define envelope
out = PinkNoise.ar(burstEnv); //Noise burst
CombL.ar(
out,
delayTime,
delayTime,
delayDecay,
add: out); //Echo chamber
} //End Ugen function
)
)

The next step to a Karplus-Strong pluck is to shorten the regeneration or delay time to a value
short enough that we hear hundreds of echoes per second. These echoes of the burst of noise will
be perceived as a periodic wave. To do this, set the delay time to 0.1 (ten times per second, or 10
Hz), 0.01 (100 times per second, or 100 Hz), then 0.001 (1000 Hz), etc. The decay time is the
reciprocal of the pitch we hear (1/100th of a second, 100 Hz). So what delay time would we enter
for the pitch A 440? (That is, 440 times per second.)

SC provides a message that will help us do the calculation: reciprocal. Any number (remember, a
number is an object) will understand the message reciprocal. Try the line below several times
with different values. Check SC's math if you'd like.
10.3 reciprocal

440.reciprocal.postln;

The message reciprocal can be combined with midicps. We still have to deal with MIDI
numbers, but that's easier than frequency.
10.4 midi to cps to reciprocal

69.midicps.reciprocal.postln;

Insert this section into the pluck instrument.


10.5 pluck (scope, midicps, reciprocal, EnvGen, Env, perc, PinkNoise, CombL)

(
Synth.scope(
{ //Beginning of Ugen function
var burstEnv, att = 0, dec = 0.05; //Variable declarations
var drySignal, delayTime, delayDecay = 10;
var midiPitch = 69; // A 440

delayTime = midiPitch.midicps.reciprocal;
burstEnv = EnvGen.kr(Env.perc(att, dec)); //Define envelope
drySignal = PinkNoise.ar(burstEnv); //Noise burst
CombL.ar(drySignal, delayTime, delayTime,
delayDecay, add: drySignal); //Echo chamber
} //End Ugen function
)
)

Did the midi number conversion work? How would you check it? (Insert a .postln;)
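
(For example, a sketch of that check: replace the delayTime line in the patch above with a version that posts the value as it is calculated.)

delayTime = midiPitch.midicps.reciprocal.postln; //for A 440 this should post a value near 0.00227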

The next step removes the entire UgenGraph function and places it higher in the code using a
variable for storage. That variable is then used inside a Spawn, as in previous patches, and that is
used as the first argument for Synth.scope. The UgenGraph could have been left in the
Synth.scope, inside the Spawn.ar. The results are the same. But I think this is a little clearer. I've
also shortened the delayDecay. And the midiPitch is no longer a static value, but a random
choice between 32 and 55.

The reason we need an additional envelope at the end is to terminate the overall patch. Without
this envelope the Synth will continue to run each Ugen while continuing to start new Ugens for
subsequent generations of the Spawn. With the envelope, the Synth knows it can release those
Ugens and free that CPU usage.
10.6 Spawn and pluck (Spawn, scope, midicps, reciprocal, EnvGen, Env, perc,
PinkNoise, CombL)

(
var pluckInst;

pluckInst = { //Beginning of Ugen function


var burstEnv, att = 0, dec = 0.05; //Variable declarations
var out, delayTime, delayDecay = 0.5, midiPitch;
midiPitch = rrand(32, 55); //random MIDI pitch between 32 and 55
delayTime = midiPitch.midicps.reciprocal;
burstEnv = EnvGen.kr(Env.perc(att, dec)); //Define envelope
out = PinkNoise.ar(burstEnv); //Noise burst
out = CombL.ar(out, delayTime, delayTime,
delayDecay, add: out); //Echo chamber which produces pitch
out = out*EnvGen.kr(Env.perc(0, 1)); //overall envelope
out //return this variable
}; //End Ugen function

Synth.scope({Spawn.ar(pluckInst, 1, 0.25)})
)

Listen carefully to each attack and the character of each pitch. It is clearly the same instrument,
yet each note has a slightly different timbre. The character changes because each time the
function "pluckInst" is run a new burst of noise is used, which has a different wave shape.
Traditional presets on synthesizers lack this complexity, which is inherent in natural instruments.

Just For Fun: Karplus-Strong Patch

In the expanded K-S patch I've added a set of legal pitch choices. The variable midiPitch is set to
an array of two pitches (left and right), one a random choice of the legal pitch array, the other a
wrapped increment of the pitch array (using count as a pointer). The variable articulation has
replaced delayDecay because it is used to shorten or lengthen each attack. Since this isn't really
perceived as longer or shorter note values, but rather sharp or sustained accents, I use the term
articulation. In the burstEnv the mul value (which represents amplitude) is set to either 0 or 1 for
each channel. 0 will result in no sound. 1 will be full volume. This essentially turns pitches on
and off, creating a rhythmic element. The "out" is placed in a resonant low pass filter (covered
later), and a reverb (covered later).

Try changing the legalPitches array to various scales: whole tone, diatonic, chromatic, octatonic,
quarter tone, etc. (a few example arrays are sketched after the patch below). Try changing the two
values for frequency in the LFNoise1. Try changing the second and third arguments for LFNoise1;
make sure the third value is greater than the second (by at least 60). "Uncomment" the postln line
to see values printed to the screen. It is common to place a group of values you want to print in a
single array. This allows you to use only one "postln" message. There is one aspect of this code
that is very inefficient: the note on/off just turns the volume down, but the synth still has to
calculate those Ugens and play them with no volume. This uses CPU without producing sound.
10.7 expanded pluck (scope, midicps, choose, reciprocal, EnvGen, Env, perc,
PinkNoise, CombL, RLPF, LFNoise1, AllpassN, Spawn)

var pluckInst, count = 0;

pluckInst = { //Beginning of Ugen function


var burstEnv, att = 0, dec = 0.05, legalPitches; //Variable declarations
var out, delayTime, midiPitch, articulation;
legalPitches = [0, 2, 4, 6, 8, 10]; //whole tone scale
articulation = [0.125, 0.25, 0.5, 1.0].choose;
//midiPitch is set to a L&R array of one of the legalPitch choices, plus
//an octave. The left channel wraps through the choices
midiPitch = [legalPitches.choose + [36, 48, 60].choose,
legalPitches.wrapAt(count) + [36, 48].choose];
count = count + 1; //Count is used with wrapAt above
// [midiPitch, count].postln; //For checking values
delayTime = midiPitch.midicps.reciprocal; //Calculate reciprocal
//The mul value (amplitude) for the envelope is set to either 1 (on)
//or 0 (off). This is done for both channels.
burstEnv = EnvGen.kr(Env.perc(att, dec),
mul: [[0, 1].choose, [0, 1].choose]); //Define envelope
out = PinkNoise.ar(burstEnv); //Noise burst
out = CombL.ar(out, delayTime, delayTime,
articulation, add: out); //Echo chamber
out = RLPF.ar(out, LFNoise1.kr([0.5, 0.43], 2000, 2100), 0.5); //Filter
2.do({out = AllpassN.ar(out, 0.01, rrand(0.005, 0.01), 4)}); //Reverb
out = out*EnvGen.kr(Env.perc(0, articulation)); //overall envelope
out //return this value
}; //End Ugen function

Synth.scope({Spawn.ar(pluckInst, 2, 0.125)})
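
Here are a few arrays you might substitute for legalPitches, as suggested above. These are my own examples, not from the book; each lists pitch classes in semitones (or fractions of semitones) within one octave.

legalPitches = [0, 2, 4, 6, 8, 10]; //whole tone (the original)
legalPitches = [0, 2, 4, 5, 7, 9, 11]; //diatonic (major)
legalPitches = [0, 1, 3, 4, 6, 7, 9, 10]; //octatonic
legalPitches = Array.series(12, 0, 1); //chromatic
legalPitches = Array.series(24, 0, 0.5); //quarter tone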

11. Time Variant Control Sources, Offset and Scaling with Mul and Add

11 Assignment:

a) Fill in the following equations for a ugen with a default output of -1 to 1.


Offset = 500, Scale = 200, Range = _____ to _____
Offset = 1100, Scale = 250, Range = _____ to _____
Offset = 0.5, Scale = 0.4, Range = _____ to _____
Offset = 1.0, Scale = 1.0, Range = _____ to _____
Range = 0 to 1, Offset = _____, Scale = _____
Range = 60 to 1000, Offset = _____, Scale = _____
Range = -500 to 500, Offset = _____, Scale = _____
Range = 0.5 to 1.5, Offset = _____, Scale = _____

Time variant phenomena are critical in creating natural and expressive sounds. The Chaotic Bell
I mentioned earlier is an excellent example of time variant spectra. Each upper partial in that
example has its own envelope, and the envelopes change each time the bell is "struck." Listen to
the timbre carefully and note that different upper partials ring longer with each attack. This is
what the book illustrates on pages 218 through 221. I think time variance is a little complex to
get into at this stage, but I would like to demonstrate a time varied vibrato, which will serve as a
nice introduction to controls in the next section.

Vibrato is a slight fluctuation in either the amplitude (in the case of voice) or the pitch (in the
case of stringed instruments). String instruments roll the finger forward and backward such that
the actual pitch moves between, say A 435 and 445. Essentially moving in and out of tune about
5 times per second. In our previous SinOsc patches the frequency argument was a static value
(such as 440). For a vibrato we need to replace the static value with some function or Ugen that
will change over time smoothly between 435 and 445. The shape of a vibrato is really like a sine
wave: it moves back and forth between two values at a periodic and predictable rate. Could we
use two SinOsc Ugens, the output of one as the freq argument for another SinOsc? Yes, if it is
scaled and offset correctly.

The normal output of a SinOsc is a smooth graph between -1 and 1. If we want to use it as a
control for pitch we need to modify it in some way to get values between 435 and 445. This is
where the arguments "mul" and "add" become useful. These two arguments scale and offset
(multiply and add) the normal output of -1 to 1 with a center value of 0.

Offset and Scaling with Mul and Add

So far we have used the mul argument to change the amplitude of a wave. The add argument
offsets the graph of a ugen. It changes the center from 0 to the add value. Here are three lines of
code and graphs illustrating how the add and mul arguments change the wave.
11.1 add and mul; offset and scale

{SinOsc.ar}.plot

{SinOsc.ar(mul: 0.2)}.plot

{SinOsc.ar(mul: 0.2, add: 0.5)}.plot

a) default SinOsc b) SinOsc scaled to 0.2 c) scaled to 0.2 and offset to 0.5

This is one of the most confusing concepts in SC because the final range of values is completely
different from the offset and scale numbers themselves. You use an offset of 0.5 and a scale of 0.2, but
the range of values ends up being 0.3 to 0.7. I find it easiest to first calculate the center value (the
offset or add), then the scale (the mul).

To generate values useful for pitch the offset and scale must be much larger. The line below
shows an offset and scale of 300 and 100. When you first run the code you won't see anything on
the graph because the default range of the graph is -1 to 1. Use the up arrow key to change the size
of the window.
11.2 confusing; mul: 300, add: 100, range: 200 to 400

{SinOsc.ar(mul: 100, add: 300)}.plot

offset (add): 300, scale (mul): 100, actual result: 200 to 400

Note that the offset and scale (300 and 100) are different from the final range: 200 to 400. This is
confusing because when modifying a ugen for audio frequency values you usually think in terms
of a range. But the final range, low and high values, are different from the values necessary in
mul and add. Adding to the confusion is the fact that not all ugens work this way. Only those
with a default value of negative and positive 1. Some ugens, such as Env and EnvGen, or
LFPulse, have a default range of 0 to 1. So the mul and add are more straightforward. An
EnvGen that is scaled by 200 and offset by 100 will have final values of 100 to 300.
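
(Here is a sketch of that last claim. The particular envelope is my own choice, and it assumes that EnvGen.kr accepts mul and add keywords as in the patches later in this text, and that plot will graph a control rate ugen the same way it graphs SinOsc.ar above.)

{EnvGen.kr(Env.perc(0.1, 0.5), mul: 200, add: 100)}.plot //values move between 100 and 300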

For ugens with a default of plus and minus 1 the following relationships apply.

–The add is the center value of a range.
–The lowest value is the add minus the mul.
–The highest value is the add plus the mul.
–The range is the mul times 2.

It is possible to use variables to clarify the code, but it's usually just as easy to do the math in
your head. The ugen will also run more efficiently without the added variables and math.
11.3 less confusing?

var low, high;
low = 200;
high = 400;
{SinOsc.ar(mul: (high - low)/2, add: (high - low)/2 + low)}.plot

So what values would be use for a SinOsc that we wanted to control the vibrato of a pitch? What
would the center frequency be? What would the range be? An add of 440 would make the center
value 440. A mul of 5 would make the scale of plus and minus 5. So the range of values would
be 435 to 445.

There is one other value we need to consider; the frequency of the control SinOsc. How often
should it move between those two values? About 5 times per second should make a natural
vibrato. So the frequency of the oscillation between 435 and 445 should be 5 Hz. The vibrato
section of the patch then would look like this. I'll use keyword arguments for clarity:
11.4 SinOsc as vibrato

SinOsc.ar(freq: 5, mul: 5, add: 440)

This is different from a SinOsc that was used for audio output. The term used to describe the use
of an oscillator below the audible frequency range is "low frequency oscillator," or LFO.

In the patch below the SinOsc with a frequency of 5 is stored in the variable "vibrato." That is
then used as the frequency argument for the SinOsc of Synth.scope.
11.5 vibrato

Synth.scope({
var vibrato;
vibrato = SinOsc.ar(freq: 5, mul: 5, add: 440);

SinOsc.ar(vibrato, mul: 0.5)})

This is a fairly believable vibrato. But it sounds a little crude. And this is how a lot of the early
electronic music sounded. Most musicians don't dive right into a vibrato, they usually start with a
pure tone then gradually increase the vibrato. Experiment with each of the controls in the vibrato
SinOsc above to see how they affect the sound. Which value affects the speed of the vibrato?
Which value affects the depth? Which of these would you change, and what values would you
use to make the vibrato more believable?

The "mul" argument of the control SinOsc will change the depth of the vibrato. If it is set to 0
there is no vibrato, if it is set to 5 it is at maximum vibrato. The patch above uses a single static
value. For a more believable vibrato, or a time variant vibrato (that changes in real time) we need
a value that will change with time.

In order to change the vibrato in real time, we need a Ugen that will return a value that will also
change in real time. There are countless Ugens that will work. For now we can try Line.kr.
Line.kr returns values that move between a minimum and maximum over a specified length of
time. Here is where it would fit into the patch:
11.6 Line.kr

//Vibrato
Synth.scope({
var vibrato;
vibrato = SinOsc.ar(freq: 5, mul: Line.kr(0, 5, 3), add: 440);
SinOsc.ar(
vibrato,
mul: 0.5)})

The Line.kr doesn't really generate any signal per se. If we inserted just a Line.kr into a synth,
nothing would come out. But inserted into the correct position of a ugen that does produce
sound, it gives us values that vary in real time.

The first argument for Line.kr is the beginning value, the second is the ending value, and the
third is the amount of time in seconds it will take to move between the two points. Try beginning
with a high value (e.g. 10) and ending with 0. Try a faster or slower amount of time.

In this patch we control the depth of the vibrato. Would it be more natural to control the speed
(frequency) of the vibrato over time rather than the depth? What would you change to get such
an effect? Could you control both? Could you also control the overall pitch, e.g. moving between
a range of 300 to 400?
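Here is one possible answer to those questions (a sketch only; the specific values and durations are mine): separate Line.kr controls for the depth, the speed, and the center pitch.

//Vibrato with time-variant depth, speed, and center pitch
Synth.scope({
var depth, speed, center, vibrato;
depth = Line.kr(0, 10, 6); //depth grows from 0 to 10 Hz over 6 seconds
speed = Line.kr(3, 7, 6); //speed grows from 3 to 7 times per second
center = Line.kr(300, 400, 6); //center pitch glides from 300 to 400 Hz
vibrato = SinOsc.ar(speed, mul: depth, add: center);
SinOsc.ar(vibrato, mul: 0.5)
})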

The vibrato SinOsc and the Line.kr are both controls. That is they are controlling the frequency
of another Ugen. Having such precise and virtually limitless control is what attracted early
composers to synthesis. The values above result in a rather natural sounding vibrato. A violin
will rarely exceed a speed of 10 times per second. But with synthesis, it is easy to go beyond
those limits. Try increasing the mul and the freq arguments to values beyond what you would
expect from a natural vibrato: 33, 100, 1200, etc., for freq, and the same for mul. Try changing
the add to other pitches. Remember that the mul in this case should not exceed the add;
otherwise negative values would result. The sounds generated by these excessive values are
unique to synthesis and the electronic age.
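For instance, this sketch (the values are arbitrary) pushes the control well beyond a natural vibrato; note that the mul (300) still stays below the add (400).

Synth.scope({SinOsc.ar(SinOsc.ar(freq: 65, mul: 300, add: 400), mul: 0.5)})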

12. Wave Forms, FM/AM Synthesis, Sequencer, Sample and Hold, Real Time
Monitoring with Peep

12 Assignment:

a) What sidebands would result from an AM patch using two sine waves with a carrier of
500 Hz and a modulator of 111?

b) In the patch below the frequency of the input wave and the trigger for Latch are being
controlled by MouseX and MouseY. Use the mouse to find interesting patterns. Then insert
two Peeps to monitor the frequency of each (wrap the Peep around the MouseX and
MouseY) and once again locate interesting patterns. Note the ratio of the input and trigger
frequency where interesting patterns emerge.

Synth.play({
SinOsc.ar(Latch.kr(
SinOsc.ar(MouseX.kr(1, 10), mul: 400, add: 500),
Impulse.kr(MouseY.kr(3, 15))
), mul: 0.3)
})

Wave Forms

We've used a number of different wave forms in previous chapters. You have probably noticed
that each has a unique tone quality. Wave shape coincides with timbre. Here is an illustration of
some waves available in SC, including noise. The more sharp edges there are in the wave the
brighter the sound will be.
12.1 wave forms (plot, SinOsc, Saw, LFTri, Pulse, PinkNoise, LFNoise0)

(
Synth.plot({[
SinOsc.ar(400, mul: 0.5),
Saw.ar(400, add: -0.4),
LFTri.ar(400, mul: 0.5),
Pulse.ar(400, add: -0.4),
PinkNoise.ar,
LFNoise0.ar(1200)]})
)

(
var wave;
wave = 0; //change to try each wave
Synth.play({[
SinOsc.ar(400, mul: 0.5),
Saw.ar(400, mul: 0.5),
LFTri.ar(400, mul: 0.5),
Pulse.ar(400, mul: 0.5),
PinkNoise.ar,
LFNoise0.ar(1200)].at(wave)})
)

There is another reason to understand and use differing wave shapes: LFO (Low Frequency
Oscillator) control. You will encounter the term LFO with virtually every synthesizer. It is used
to describe oscillators that are designed to better generate frequencies below audio range (less
than 60 Hz). You don't actually hear the wave, but you hear the effect the wave has on the
parameter it is controlling. As synthesizers progressed the distinction between low frequency and
audio rate oscillators became superfluous. In SC most oscillators are just as adept at generating
frequencies at 10 Hz as they are at 10000 (even 100000). But in some cases the nature of the
wave is different in low frequency than it is in audio range. Take a look at these two saw waves,
for example:
12.2 Saw, LFSaw, Pulse, LFPulse

Synth.scope({Saw.ar(200, mul: 0.5)})

Synth.scope({LFSaw.ar(200, mul: 0.5)})

Synth.scope({Pulse.ar(200, mul: 0.5)})

Synth.scope({LFPulse.ar(200, mul: 0.5)})

The LFSaw and LFPulse are non-band-limited, which results in a much smoother wave, while the
Saw and Pulse are band-limited, and are more jagged. The LFSaw and LFPulse are
theoretically perfect waves with an infinite number of overtones. This causes problems (aliasing)
at higher frequencies, so the non-band-limited waves should only be used as controls. The Saw
and Pulse are band limited, meaning the overtones will not exceed a given frequency, and
therefore they should be used at audio frequencies.

Here are some examples of LFO control using different wave forms. Notice you can "hear" the
shape of the wave as it controls pitch. Take another look at the Pulse and Tri waves in the
example above before running these lines.
12.3 LF waves (SinOsc, LFPulse, LFSaw, LFTri, mul, add)

Synth.scope({SinOsc.ar(LFPulse.kr(3, 0.3, 200, 500), mul: 0.5)})

Synth.scope({SinOsc.ar(LFSaw.kr(3, 200, 500), mul: 0.5)})

Synth.scope({SinOsc.ar(LFTri.kr(1, 200, 1000), mul: 0.5)})

The SinOsc is the same in high and low frequency, so there is no LFSinOsc.

Low frequency control is often used to affect pitch. In natural instruments the changes in pitch
are usually slight and narrow. Earlier we did an example that replicated a vibrato. The speed of
the vibrato was only about 5 times per second, and the width of the vibrato was about 10 Hz.
Vibrato can also be done by controlling amplitude (often called tremolo). Likewise, the amount of change is slight.
Here are two examples of pitch and amplitude vibrato using a Triangle wave.
12.4 LF control (SinOsc, LFTri, mul, add)

Synth.scope({SinOsc.ar(LFTri.ar(5, 20, 400), mul: 0.5)})

Synth.scope({SinOsc.ar(400, mul: LFTri.ar(5, 0.3, 0.6))})

They are both fairly convincing vibratos, if a bit exaggerated. Notice that I am using the same
Ugen for control, LFTri, but in the first example it is controlling pitch, in the second it is
controlling amplitude. Notice also that the mul and add values are adjusted accordingly: when
controlling pitch they are 20 and 400 (a range of 380 to 420), when controlling amp they are 0.3
and 0.6 (a range of 0.3 to 0.9).

Do you know of any instrument that has such a wide vibrato? Since the rates and range of both
these are slightly exaggerated they sound unnatural; like the soundtrack of early sci-fi. The faster
the vibrato for amplitude and the faster and wider the vibrato for frequency, the more unnatural it
sounds. These sounds are identified with synthetic music, and are classic uses of voltage control.
Below are the same examples, but this time I've inserted a MouseX.kr that will allow you to
bring the frequency slowly up on the amplitude control, then the same for the frequency control.
Start with the mouse at the left of the screen, then move slowly to the right. In the last example I've combined
the frequency and the amplitude of the frequency control into a MouseX and MouseY. Begin this
example in the upper left corner of the screen and first move to the right, then back, then down,
then back up, then explore the entire screen.
12.5 synthetic sounds (scope, SinOsc, LFTri, MouseX, mul, add)

Synth.scope({
SinOsc.ar(400,
mul: LFTri.ar(MouseX.kr(1, 1000, 'exponential'), 0.3, 0.6)
)})

Synth.scope({
SinOsc.ar(
LFTri.ar(MouseX.kr(1, 800, 'exponential'), 100, 400),
mul: 0.5)
})

Synth.scope({
SinOsc.ar(
LFTri.ar(12, MouseX.kr(20, 800, 'exponential'), 1000),
mul: 0.5)
})

Synth.scope({
SinOsc.ar(
LFTri.ar(
MouseY.kr(1, 12000, 'exponential'),
MouseX.kr(80, 800, 'exponential'),
1000),
mul: 0.5)
})

FM and AM synthesis

When you exceed an LFO range it sounds like you are tuning a radio. That's because you are
entering the realm of frequency and amplitude modulation. The AM and FM bands on your radio
use this same technology to transmit signals through the air. (It also sounds surprisingly similar
to bird song; I believe I read somewhere that birds have two voice boxes that modulate one
another in a similar fashion.)

What distinguishes AM and FM from LFO control are sidebands. These are additional frequencies that
appear as a product of the two modulated frequencies. There are upper and lower sidebands.

In amplitude modulation the sidebands are the sum and difference of carrier frequency (the audio
frequency that is being modulated) and the modulator frequency (the frequency that is
controlling the audio frequency). So a carrier frequency of 500 and a modulating frequency of
112 could result in two sidebands: 612 and 388. If there are overtones in one of the waves (e.g. a
saw wave being controlled by a sine wave), then there will be sidebands for each overtone.
12.6 AM Synthesis (SinOsc, scope, mul, Saw)

Synth.scope({SinOsc.ar(500, mul: SinOsc.ar(50, mul: 0.5))})

Synth.scope({Saw.ar(500, mul: SinOsc.ar(50, mul: 0.5))})

In the example above the outside SinOsc is the carrier and the inner SinOsc is the modulator. So
the sidebands should be 550 and 450. Change the argument 50 to other values to see how it
changes the sound.
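One way to hear the sidebands move (a sketch along the lines of the earlier MouseX examples) is to let the mouse sweep the modulator frequency. As MouseX moves from 1 to 500 the sidebands spread from about 499 and 501 out to 0 and 1000.

Synth.scope({SinOsc.ar(500, mul: SinOsc.ar(MouseX.kr(1, 500), mul: 0.5))})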

In frequency modulation a similar effect takes place. But with FM many more sidebands can be
generated depending on the modulation index. The index reflects how far the modulator pushes the
carrier frequency away from its center; in these patches that is the amplitude (mul) of the modulating wave. In SC it is a little more difficult
to recognize the components of FM synthesis because both the carrier and modulator frequencies
can appear as arguments in a single SinOsc. The add of the modulator can represent the carrier
frequency. For clarity in the example below I've used this form: 400 + SinOsc(etc.). The 400 is
the carrier frequency and the frequency of the second SinOsc is the modulating frequency. In the
example below 400 is the carrier frequency, 124 is the modulator frequency and 100 is the index.
(A higher index results in more sidebands.)
12.7 FM Modulation (scope, SinOsc, mul)

Synth.scope({SinOsc.ar(400 + SinOsc.ar(124, mul: 100), mul: 0.5)})

The code below illustrates the difference between a change in the modulating frequency and a
change in the modulation index. The first example has a MouseX to control the modulating
frequency. The second example has a MouseX to control the index. The third example has
MouseX and MouseY to control both.
12.8 MouseX and MouseY controlling FM frequency and index

Synth.scope({SinOsc.ar(400 +
SinOsc.ar(
MouseX.kr(100, 500), //control freq
mul: 100 //index
), mul: 0.5)})

Synth.scope({SinOsc.ar(400 +
SinOsc.ar(
111, //control freq
mul: MouseX.kr(10, 1000) //index
), mul: 0.5)})

Synth.scope({SinOsc.ar(400 +
SinOsc.ar(
MouseX.kr(100, 500), //control freq
mul: MouseY.kr(10, 1000) //index
), mul: 0.5)})

Sequencer

There are two more classic controls I would like to illustrate before we move on. The first is a
sequencer. The sequencer moves through a set of prescribed values at a given rate or trigger.

The sequence is defined by an array preceded by a "`". The sequencer works much like the other
unit generators we've used. It can be used to control any aspect of sound. In this example it is
used to control pitch. If the array is to be a set of pitches, the question of terminology for pitch
must be addressed. An array of frequencies could be used ([440, 224, 534, 456, 767, 875]), but it
is difficult to enter exact frequencies for an equal tempered scale using this terminology. The
message midicps can be used to convert a set of MIDI pitches into Hz, or cycles per second. To
illustrate this, run the following lines. The first converts and prints the midi number 60 (middle
C) to cycles per second. The second converts the entire array of midi pitches to cycles per
second. I find this and other music related messages in SC handy even when not producing
music. SC is a music calculator, so to speak.
12.9 midicps

60.midicps.postln;

[60, 62, 64, 65, 67, 69, 71].midicps.postln;

So for this example I will use an array of MIDI values, then translate them into cycles per
second, then use them in the sequencer. But don't feel you need to be limited to MIDI values.
You can use any set of frequencies, or even random frequencies. You can also fill the array
automatically using Array.fill, or modify the array in a number of ways (illustrated below).
12.10 Sequencer (array, midicps, SinOsc, Sequencer, Impulse, kr)

(
var pitchArray; //Declare a variable to hold the array
//load the array with midi pitches
pitchArray = [60, 62, 64, 65, 67, 69, 71, 72];
pitchArray = pitchArray.midicps; //convert the midi pitches to cps
Synth.scope({
SinOsc.ar(Sequencer.kr(`pitchArray, Impulse.kr(8)), mul: 0.5)
})
)

The trigger can also be a random impulse, supplied by Dust.kr.
12.11 Dust.kr (array, midicps, SinOsc, Sequencer, Dust, kr)

(
var pitchArray;
pitchArray = [60, 62, 64, 65, 67, 69, 71, 72];
pitchArray = pitchArray.midicps;
Synth.scope({
SinOsc.ar(Sequencer.kr(`pitchArray, Dust.kr(8)), mul: 0.5)
})
)

You can do a lot of things to an array, including math (e.g. pitchArray = pitchArray + 20),
reverse, scramble, or fill using a function. Consider the lines below. First an array is filled with
random numbers between 60 and 83. Array.fill takes two arguments: first the number of items in
the array, then the function used to fill the array. Next we scramble the array and post it. Then
reverse it and post it. Last we add 12 to the entire array and post it.
12.12 scramble, reverse (Array, fill, postln, scramble, reverse)

var pitchArray;
pitchArray = Array.fill(10, {60 + 24.rand});
pitchArray.postln;
pitchArray.scramble.postln;
pitchArray.reverse.postln;
(pitchArray + 12).postln

Here are examples of each process in an actual patch. Notice that each time you "run" the
example the sequence is different because of the random choices. I use a C scale in the first
example. Would the sound change substantially if you used a chromatic scale?
12.13 sequencer variations (array, scramble, midicps, Sequencer, kr, Dust)

(
var pitchArray;
pitchArray = [60, 62, 64, 65, 67, 69, 71, 72];
pitchArray = pitchArray.scramble.midicps;
Synth.scope({
SinOsc.ar(Sequencer.kr(`pitchArray, Dust.kr(8)), mul: 0.5)
})
)

(
var pitchArray;
pitchArray = Array.fill(5, {60 + 12.rand});
pitchArray = pitchArray.midicps;
Synth.scope({
SinOsc.ar(Sequencer.kr(`pitchArray, Dust.kr(8)), mul: 0.5)
})
)

(
var pitchArray;
pitchArray = Array.fill(12, {400 + 1000.rand});
Synth.scope({

SinOsc.ar(Sequencer.kr(`pitchArray, Dust.kr(8)), mul: 0.5)
})
)

(
var pitchArray;
pitchArray = Array.fill((5 + 5.rand), {400 + 1000.rand});
Synth.scope({
SinOsc.ar(Sequencer.kr(`pitchArray, Dust.kr(8)), mul: 0.5)
})
)

[Insert a section using tsched, trepeat, etc.]

Sample and Hold

Another classic synthesis control source is a sample and hold. The SC equivalent to a sample and
hold is Latch.kr. A sample and hold is an interesting way to use the basic wave shapes as a
control source. A wave is used as an input that is periodically sampled, returning the value
sampled at that moment. Using a sample and hold is a quick way to get more interesting patterns
than a single sequence, but more self-similar patterns than an LFNoise source. If the sample rate
is much higher than the frequency of the wave being sampled, then you can actually hear the
shape of the wave being sampled as it is mapped to some control parameter (such as pitch). Try this
example, and listen for the shape of the Saw wave in the resulting samples and hence in the
frequencies.
12.14 Latch (Blip, Latch, LFSaw, Impulse, mul)

(
Synth.scope({
Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
LFSaw.kr(1.1, 1000, 2000), //Input for Latch
Impulse.kr(10)), //Sample trigger rate
12, //Number of harmonics in Blip
mul: 0.3 //Volume of Blip
)
})
)

The final control values are a function of the sample source rate and the trigger rate. In the
example above the shape of the wave is evident since the sample rate is much higher than the
wave rate. If the wave rate and sample rate were exactly the same, the same point in the wave
would be sampled, and the same value would be returned.
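You can test this with a variation of the patch above (my values): with the LFSaw at 10 Hz and the trigger also at 10 Hz, the same point of the wave is sampled every time and the pitch never changes. Change the trigger to 10.1 and a slow pattern reappears.

Synth.scope({
Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
LFSaw.kr(10, 1000, 2000), //wave being sampled
Impulse.kr(10)), //trigger at exactly the same rate
12, //Number of harmonics in Blip
mul: 0.3 //Volume of Blip
)
})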

Much more complex patterns can be derived from this process. But since the trigger is a steady
rate, and the wave is periodic, some type of pattern will always result.

It is also possible to use more complex wave forms as a source. Likewise, the patterns in the
wave will be reflected in the values returned by the Latch. Take the following wave as an
example. Four waves are mixed together to create a more complex, but still periodic wave. The
first example below uses a plot to show four waves at 1, 2, 3, and 5.5 Hz, mixed to one wave.
The mul and add are also mixed, so the actual mul and add are 400 and 440, and the final range is
40 to 840. Use the arrow keys to increase the range of the plot window to see the wave.

The second part places the mixed wave into a Latch.


12.15 Complex Wave as Sample Source (Mix, SinOsc, Blip, Latch, Mix, Impulse)

{Mix.ar(SinOsc.ar([1, 2, 3, 5.5], mul: 100, add: 110))}.plot(10)

Synth.scope({
Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
Mix.ar(SinOsc.ar([1, 2, 3, 5.5], mul: 100, add: 110)),
Impulse.kr(10.2)), //Sample trigger rate
3, //Number of harmonics in Blip
mul: 0.3 //Volume of Blip
)
})

Different combinations of waves and sampling rates will create different patterns. But how do
you investigate which values will result in pleasant patterns? You could insert a MouseX to try a
range of values (for the sample rate), but how do you know exactly what those values are? In a
previous section we used "postln" to monitor values, but they are only printed once when the
patch first compiles.

The ugen to use for real-time feedback of ugen values is Peep. (See also a discussion on GUI
utilities below.) Peep "reads" the output of a ugen at a given rate and prints it to the window. The
arguments are the ugen or patch combination, an identifying string and the rate. Here is a simple
example using LFNoise0 followed by the Latch patch with a Peep inserted to monitor the
frequency of the sampled wave. Notice that values around the golden mean (6.18, or 61.8% of 10)
seem to work well.
12.16 Peep (play, SinOsc, Peep, LFNoise0, mul, add, Blip, Latch, Impulse)

(
Synth.play({
SinOsc.ar(
Peep.kr(LFNoise0.kr(5, mul: 300, add: 400), "freq", 5)
)})
)

(
Synth.scope({
Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
SinOsc.ar(
Peep.kr(MouseX.kr(1.0, 10.0), "rate"),
mul: 200, add: 300), //Input for Latch
Impulse.kr(10)), //Sample trigger rate
3, //Number of harmonics in Blip
mul: 0.3 //Volume of Blip

)
})
)

The resulting frequencies can fall anywhere within the range. They are not necessarily midi
pitches or scale degrees. There are a couple of strategies that will return midi values. One is to round
or "floor" the values down to integers and use midicps to convert that to frequencies. The mul
and add must be adjusted to return values between (e.g.) 60 and 72, as shown below.
12.17 Latch and MIDI pitches (Blip, Latch, SinOsc, MouseX, mul, add, Impulse,
floor, midicps)

(
Synth.scope({
Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
SinOsc.ar(
MouseX.kr(1.0, 10.0),
mul: 12, add: 60), //Input for Latch
Impulse.kr(10)).floor.midicps, //Sample trigger rate
3, //Number of harmonics in Blip
mul: 0.3 //Volume of Blip
)
})
)

This constrains the possible choices to midi pitches, or a chromatic scale. Limiting the choices to
a scale is a little more complicated since each step can be either whole or half, and may change
based on the scale or mode. DegreeToKey can be used for this purpose.

Just For Fun

12.18 Just For Fun, DegreeToKey sample and hold

TBD

13. GUI Interface

13 Assignment:

Create a musical calculator in a GUI window. There should be five linked numerical views
showing the relationships between frequency, midi value, and ratio. When any value is
changed in any window the other windows change correspondingly. The first view should
contain frequency one, the second the midi equivalent, the third frequency two, the fourth
the midi equivalent, and the fifth the ratio between the two. It should look like this:

GUI Interface

GUI (pronounced "gooey") stands for graphic user interface. SuperCollider has a number of tools
to simplify the user's ability to modify the sound. The patches created in SC can be combined
with the graphic user interface features to create a virtual synthesizer, teaching and
demonstration tools, or instruments for real time performance.

The quickest and easiest way to create a GUI window is to use the editor provided under the GUI
menu. First open a new GUI window, then choose Edit GUI from the GUI menu. This brings up
a separate window with items that can be dragged to the GUI window.

Drag a "String View" a "Slider View" and a "Numerical View" to the GUI window. Move them
to the position you want9.You can also resize the window itself or drag it to a different location.
The code that is required to create these windows and views could have been written from
scratch. But there is an easier method. Once you edit a window using this utility you can choose
"Generate Code to Clipboard." The code that would generate that window will be copied to the
clipboard. You can then use paste into a blank window in SC.

Now you can modify this skeleton to fit your needs. For example, you can change the "panel" to
"SinOsc Control" or "StringView" to "Frequency." You can change the position of the items or
the window itself by modifying the list of arguments in the newBy message. Once you have
made these changes, select all the code and run it. The GUI will appear again. From here you can

9 For precise positioning use the arrow keys. Shift-arrow changes the size.
make additional changes, copy the code to the clipboard and paste again. Or you can return to the
actual code and make changes there, running it to check the results.

The argument list in each newBy message represents the position of the window or the window
components. The units are screen pixels. You can make adjustments by running the code to bring
the window up, then choosing edit, then repositioning them and copying the code again. But I
also use a little math and enter the values by hand to get the look I want. The numbers are, in
order, left, top, right, bottom. The position of the window is relative to the screen. The position
of the items in the window are relative to the window. So newBy(50, 70, 120, 40) is right 50
pixels, down 70, start the rectangle, then from there draw the rectangle right 120, down 40 (see
example below). Or you could think of the first two numbers as the position of the rectangle, and
the second two as the length and height of the rectangle. (To do absolute positions use the "new"
message instead of the newBy.)

So if I wanted a series of sliders, all the same width and height, flush left, but stacked on top of
each other, then the first, third and fourth values would remain the same. But the second value
would be 10 for the first slider, then if the height of the slider is 20 and the distance between the
two sliders were 10, the next slider will be 40 (10 plus 20 plus 10), then 70, then 100, and so on.

You could also be economical and precise using a .do message: the stack of sliders above could be
generated with a single expression such as 4.do({arg i; SliderView.new(w, Rect.newBy(25, i * 30 + 10, 120, 20), etc.)}), as in the sketch below.
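Here is that idea written out in full (a sketch; the window size and the slider values are arbitrary):

(
var w;
w = GUIWindow.new("panel", Rect.newBy(170, 71, 211, 150));
4.do({arg i;
SliderView.new(w, Rect.newBy(25, i * 30 + 10, 120, 20),
"SliderView", 0.5, 0.0, 1.0, 0.1, 'linear')
});
)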

Anytime the GUI window is visible you can select Edit GUI from the GUI menu and continue to
make changes. When the window looks the way you want you can copy the code again and paste
it into the SC window.

But some aspects cannot be changed in the GUI editor. These include the name of the window,
and the four numbers following the name of each view. The numbers are, in order, beginning
value, minimum value, maximum value, and step value. The first three are self-explanatory. The
fourth is the amount of change that is applied when the view is active and the mouse or arrows
are used to increase or decrease the value.

Here is an example with a string, numerical, and slider view. The values for the slider have been
set to 0.7, 0.1, 0.9, and 0.1, appropriate for volume control. The NumericalView is set to values
appropriate for frequency control.
13.1 Gui Window

var w;

w = GUIWindow.new("panel", Rect.newBy(170, 71, 211, 90));
StringView.new( w, Rect.newBy(15, 10, 70, 20), "Freq");
SliderView.new( w, Rect.newBy(15, 40, 150, 20),
"SliderView", 0.7, 0.1, 0.9, 0.1, 'linear');
NumericalView.new( w, Rect.newBy(95, 10, 50, 20),
"NumericalView", 440, 60, 2200, 1, 'exponential');

Linking GUI Items to a Patch and to Each Other

The next step is to link each value in the window with a patch parameter. First it is important to
understand the variable "w" as an array of items. Each component within the window is added to
the array in the same order as it appears in code. So in the example below the string view is 0,
the slider view is 1, and the numerical view is 2. They are referenced as w.at(0), w.at(1), and
w.at(2). To use them in a patch they must be contained in a Plug.kr.
13.2 GUI Window in a patch

var w;
w = GUIWindow.new("panel", Rect.newBy(170, 71, 211, 90));
StringView.new( w, Rect.newBy(15, 10, 70, 20), "Freq");
SliderView.new( w, Rect.newBy(15, 40, 150, 20),
"SliderView", 0.7, 0.1, 0.9, 0.1, 'linear');
NumericalView.new( w, Rect.newBy(95, 10, 50, 20),
"NumericalView", 440, 60, 2200, 1, 'exponential');

Synth.play({SinOsc.ar(freq: Plug.kr(w.at(2)), mul: Plug.kr(w.at(1)))});

w.close;

It is often useful to link two views in the GUI. In the example below I have changed the slider
arguments so that they are also appropriate for controlling pitch. I've removed the Plug.kr for
mul, and I've linked the slider view and the numerical view using the variables f and g, and the
messages "action" and "value." When an action is applied to f, then g is given the value of f.
When an action is applied to g, then f is given the value of g. Any combination of commands and
links can be added to the function associated with f or g.
13.3 GUI items linked to each other

var w, f, g;
w = GUIWindow.new("panel", Rect.newBy(170, 71, 211, 90));
StringView.new( w, Rect.newBy(15, 10, 70, 20), "Freq");
f = SliderView.new( w, Rect.newBy(15, 40, 150, 20),
"SliderView", 440, 60, 2200, 1, 'exponential');
g = NumericalView.new( w, Rect.newBy(95, 10, 50, 20),
"NumericalView", 440, 60, 2200, 1, 'exponential');
f.action = {g.value = f.value};
g.action = {f.value = g.value};
Synth.play({SinOsc.ar(freq: Plug.kr(w.at(2)), mul: 0.4)});

w.close;

Here is a music calculator that makes use of string views, numerical views, and sliders. It looks
fairly involved, but it is mostly duplication. The items in the GUI window respond to the tab key,
the arrow keys, and numbers on the keypad (then return).
13.4 More GUI

var w, f1, f2, m1, m2, c1, c2, r1, r2, r3, pc1, pc2, s1, s2, pc;
pc = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"];
w = GUIWindow.new("panel", Rect.newBy(30, 80, 560, 160));
StringView.new( w, Rect.newBy(15, 10, 75, 20), "Frequency 1");
f1 = NumericalView.new( w, Rect.newBy(95, 10, 50, 20),
"NumericalView", 440, 0, 4400, 1, 'exponential');
StringView.new( w, Rect.newBy(150, 10, 45, 20), "Midi");
m1 = NumericalView.new( w, Rect.newBy(190, 10, 50, 20),
"NumericalView", 69, 0, 110, 0.01, 'exponential');
StringView.new( w, Rect.newBy(245, 10, 45, 20), "Cents");
c1 = NumericalView.new( w, Rect.newBy(290, 10, 50, 20),
"NumericalView", 900, 0, 1200, 1, 'exponential');
StringView.new( w, Rect.newBy(350, 10, 45, 20), "Pitch: ");
pc1 = StringView.new( w, Rect.newBy(400, 10, 25, 20), "A");
s1 = SliderView.new( w, Rect.newBy(15, 40, 485, 20),
"SliderView", 440, 60, 2200, 1, 'exponential');

StringView.new( w, Rect.newBy(15, 70, 75, 20), "Frequency 1");


f2 = NumericalView.new( w, Rect.newBy(95, 70, 50, 20),
"NumericalView", 440, 60, 2200, 1, 'exponential');
StringView.new( w, Rect.newBy(150, 70, 45, 20), "Midi");
m2 = NumericalView.new( w, Rect.newBy(190, 70, 50, 20),
"NumericalView", 69, 0, 110, 1, 'exponential');
StringView.new( w, Rect.newBy(245, 70, 45, 20), "Ratio");
r1 = NumericalView.new( w, Rect.newBy(290, 70, 50, 20),
"NumericalView", 1, 0, 20, 0.01, 'exponential');
StringView.new( w, Rect.newBy(350, 70, 50, 20), "Pitch: ");
pc2 = StringView.new( w, Rect.newBy(400, 70, 25, 20), "A");
s2 = SliderView.new( w, Rect.newBy(15, 100, 485, 20),
"SliderView", 440, 60, 2200, 1, 'exponential');

f1.action = {
m1.value = f1.value.cpsmidi;
c1.value = f1.value.cpsmidi%12*100;
r1.value = f1.value/f2.value;
s1.value = f1.value;
w.at(7).label_(pc.at((f1.value.cpsmidi%12).floor(1).asInteger));
};
m1.action = {
f1.value = m1.value.midicps;
c1.value = f1.value.cpsmidi%12*100;
r1.value = f1.value/f2.value;
w.at(7).label_(pc.at((f1.value.cpsmidi%12).floor(1).asInteger));
};
s1.action = {
f1.value = s1.value;
m1.value = f1.value.cpsmidi;
c1.value = f1.value.cpsmidi%12*100;
w.at(7).label_(pc.at((f1.value.cpsmidi%12).floor(1).asInteger));
r1.value = f1.value/f2.value;
};

f2.action = {
m2.value = f2.value.cpsmidi;
r1.value = f1.value/f2.value;
w.at(16).label_(pc.at((f2.value.cpsmidi%12).floor(1).asInteger));
s1.value = f1.value;
};
m2.action = {
f2.value = m2.value.midicps;
w.at(16).label_(pc.at((f2.value.cpsmidi%12).floor(1).asInteger));
r1.value = f1.value/f2.value;
};
s2.action = {
f2.value = s2.value;
m2.value = f2.value.cpsmidi;
w.at(16).label_(pc.at((f2.value.cpsmidi%12).floor(1).asInteger));
r1.value = f1.value/f2.value;
};

r1.action = {
f2.value = f1.value*r1.value;
c1.value = f1.value.cpsmidi%12*100;
s1.value = f1.value;
m2.value = f2.value.cpsmidi;
w.at(7).label_(pc.at((f1.value.cpsmidi%12).floor(1).asInteger));
w.at(16).label_(pc.at((f2.value.cpsmidi%12).floor(1).asInteger));
s1.value = f1.value;
}

Gui Monitor

Now that we have discussed a few control sources it may be useful to illustrate GUI monitors.
I've illustrated how you can use .postln and Peep to monitor values. But it is also possible to print
values and strings to a GUI window.

Here is a simple example using a Spawn, and a Pulse or Blip instrument. The Pulse or Blip is
chosen at random when the instrument is played (using the variable "wave" as a reference, which
is then used in conjunction with an array and the .at() message). Likewise, random values are
chosen for each parameter. The "tone" variable is given a value between 0.1 and 0.9. When used
in the Pulse instrument this value represents the pulse width, which affects the tone. When used
with the Blip it is multiplied by 24 and used as the number of harmonics. This likewise affects
the tone. The first example shows how these parameters can be monitored using postln. The
second shows how to post these values to a GUI window.
13.5 Simple patch using postln to monitor

Synth.play({

var freq, amp, tone, attack, decay, wave, en, out;

Spawn.ar({

freq = rrand(60, 1600);


amp = rrand(0.1, 0.8);

tone = rrand(0.1, 0.9);
attack = rrand(0.001, 0.1);
decay = rrand(1.0, 2.0);
wave = rrand(0, 1);
en = EnvGen.kr(Env.perc(attack, decay), amp);
[freq, amp, tone, attack, decay, ["pulse", "blip"].at(wave)].postln;

out =
[
Pulse.ar(freq, tone, en),
Blip.ar(freq, tone*24, en)
].at(wave);

}, 2, 1);

})

The code for the GUI window looks a little intimidating, but remember it is mostly duplication.
Note also that I have moved the NumericalViews all to the top of the block so that their position
in the w array is sequential and they can be assigned in sequence. The first NumericalView is
w.at(0), the next is (1), etc. To print numerical values, the window item is first referenced using
w.at, then is given a value using the .value message. For text (that is, to change a string text in a
string view) use the .label_("string") syntax.
13.6 using GUI to monitor

(
var w;
w = GUIWindow.new("panel", Rect.newBy(40, 60, 190, 180));
//right column
NumericalView.new( w, Rect.newBy(90, 10, 60, 20),
"NumericalView", 0, 0, 2000); //w.at(0)
NumericalView.new( w, Rect.newBy(90, 34, 60, 20),
"NumericalView", 0, 0, 1); //1
NumericalView.new( w, Rect.newBy(90, 58, 60, 20),
"NumericalView", 0, 0, 1); //2
NumericalView.new( w, Rect.newBy(90, 82, 60, 20),
"NumericalView", 0, 0, 1); //3
NumericalView.new( w, Rect.newBy(90, 106, 60, 20),
"NumericalView", 0, 0, 10); //4
StringView.new( w, Rect.newBy(90, 130, 70, 20), "Wave"); //5
//left column
StringView.new( w, Rect.newBy(10, 10, 70, 20), "Freq");
StringView.new( w, Rect.newBy(10, 34, 70, 20), "Amp");
StringView.new( w, Rect.newBy(10, 58, 70, 20), "Tone");
StringView.new( w, Rect.newBy(10, 82, 70, 20), "Attack");
StringView.new( w, Rect.newBy(10, 106, 70, 20), "Decay");
StringView.new( w, Rect.newBy(10, 130, 70, 20), "Wave");

Synth.play({

var freq, amp, tone, attack, decay, wave, en, out;

Spawn.ar({

freq = rrand(60, 1600);
amp = rrand(0.1, 0.8);
tone = rrand(0.1, 0.9);
attack = rrand(0.001, 0.1);
decay = rrand(0.0, 3.0);
wave = rrand(0, 1);
en = EnvGen.kr(Env.perc(attack, decay), amp);
w.at(0).value = freq;
w.at(1).value = amp.round(0.1);
w.at(2).value = tone.round(0.01);
w.at(3).value = attack.round(0.01);
w.at(4).value = decay.round(0.01);
w.at(5).label_(["Pulse", "Blip"].at(wave));

out =
[
Pulse.ar(freq, tone, en),
Blip.ar(freq, tone*24, en)
].at(wave);

out;

}, 2, 1);

});

w.close
)

When are the values updated? Each time the function inside the spawn is run. So with each
spawn event a new set of values are printed to the window. If you want to update values during a
steadily running patch, you need to use synth.schedule or synth.trepeat.
13.7 Updated values in GUI

var w, inst;
w = GUIWindow.new("Flashing Label", Rect.newBy(20, 70, 200, 80));
StringView.new( w, Rect.newBy(10, 10, 80, 20), "--"); //w.at(0)

Synth.play({arg synth;

var control, out;


control = LFNoise0.kr(4, mul: 1000, add: 1060);
out = SinOsc.ar(control, mul: 0.1);
synth.trepeat(0, 0.75,
{w.at(0).label_(control.poll.asString);
w.at(0).labelColor_(rgb(0,0,0))});
synth.trepeat(0.25, 0.75,
{w.at(0).label_(control.poll.asString);
w.at(0).labelColor_(rgb(255,0,0))});
synth.trepeat(0.5, 0.75,
{w.at(0).label_(control.poll.asString);
w.at(0).labelColor_(rgb(0,0,255))});
out
});

w.close;

Here is a more involved example. The type of control is what changes in this patch, so the
control type is printed to one of the strings in the window. If the control type is a sequence, then
the sequence is printed to a series of string views. This set of string views is created using the .do
message. The pitch class is referenced using an array of strings, and the octave is parsed using
the midi value and .div.
13.8 complex GUI monitor

(
var w, inst;
w = GUIWindow.new("Monitor", Rect.newBy(20, 70, 480,
120)).backColor_(rgb(176,17,22));
StringView.new( w, Rect.newBy(10, 10, 90, 20), "Control type"); //w.at(0)
StringView.new( w, Rect.newBy(10, 30, 90, 20), "Nil"); //1
StringView.new( w, Rect.newBy(100, 10, 90, 20), "Control freq"); //2
NumericalView.new( w, Rect.newBy(110, 30, 60, 20),
"NumericalView", 0, -1e+10, 1e+10, 0, 'linear'); //3
StringView.new( w, Rect.newBy(200, 10, 90, 20), "Freq Range"); //4
NumericalView.new( w, Rect.newBy(210, 30, 60, 20),
"NumericalView", 0, -1e+10, 1e+10, 0, 'linear'); //5: low range
NumericalView.new( w, Rect.newBy(280, 30, 60, 20),
"NumericalView", 0, -1e+10, 1e+10, 0, 'linear'); //6: high range
StringView.new( w, Rect.newBy(10, 55, 90, 20), "Sequence"); //7
10.do({arg i; StringView.new( w, Rect.newBy(i*35 + 100, 55, 25, 20),
"Nil")}); //8-18

inst = {
var freq, seq, mul, add, controls, name, contIndex, pitchString;
//assign variables
pitchString =
["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"];
freq = rrand(4, 15);
mul = rrand(100, 1000);
add = mul + rrand(60, 500);
seq = Array.fill(10, {rrand(38, 72)});
contIndex = rrand(0, 3);
//Print control type to window
w.at(1).label_(
["LFNoise", "Sample&Hold", "Sequence", "Pulse"].at(contIndex));
w.at(3).value = freq; //write freq to window
w.at(5).value = add - mul;
w.at(6).value = mul + add;
seq.do({arg i, c; //print pitch class to appropriate window item
if(contIndex == 2, //only if control is 2 (seq)
//Parse the midi value and print corresponding string and octave
{w.at(c+8).label_(pitchString.at(i%12)++i.div(12).asString)},
{w.at(c+8).label_("--")}) //otherwise, print a dash
});
controls = [ //Four types of controls
LFNoise0.kr(freq, mul, add), //Control 0
Latch.kr( //Control 1
LFSaw.kr(freq*rrand(0.5, 0.8), mul, add), //Input for Latch
Impulse.kr(freq)), //Sample trigger rate
Sequencer.kr(`(seq.midicps), Impulse.kr(freq)), //Control 2
LFPulse.ar(

freq,
LFNoise0.ar(0.3, mul: 0.35, add: 0.45),
mul, add) //Control 3
];
Blip.ar( //Audio Ugen
controls.at(contIndex), rrand(1, 10),
mul: EnvGen.kr(Env.linen(0, 2, 2.3, 0.5))
)};

Synth.play({Spawn.ar(inst, 2, 5)}); //Play the inst every five seconds

w.close; //close the window


)

I've also found that instead of creating a separate window for each value, it is easy to declare just
a few large (long) string views and then print a group of values and strings to a single string
view using the concatenate message. In this case the numbers and number arrays must be
converted to strings using ".asString." It is also useful to limit the number of decimal points for
floating point values using the .round message. The concatenate message links two strings
together. "this " ++ "that" will return "this that."
13.9 GUI monitor; single string view

w = GUIWindow.new("Parameter Monitor",
Rect.newBy(20, 70, 550, 100)).backColor_(rgb(72,112,186));
StringView.new( w, Rect.newBy(5, 0, 500, 15), "String Monitor"); // w.at(0);
StringView.new( w, Rect.newBy(5, 20, 500, 15), "--"); //w.at(1);

a = rrand(60, 1000);
b = rrand(1.0, 2.0);
c = Array.fill(rrand(3, 6), {rrand(0, 10)});

w.at(1).label_(
"Values a: " ++ a.asString ++
" b: " ++ b.asString ++
" b rounded: " ++ b.round(0.01).asString ++
" c: " ++ c.asString);

The formatting and spacing in this example relies less on the Rect.newBy arguments and more
on the number of spaces you insert in the strings. If you are used to C you may be tempted to
insert backslash characters such as a \t (for a tab). But these characters are not honored by the
string label. So spacing is achieved with spaces.

The patch below uses a GUI monitor. A few notes:

The code for the GUI window is enclosed in parentheses. This has absolutely no effect on the
patch. But it does make it easy to select and run the GUI code only. This is handy for making
small adjustments to the GUI window without having to run the entire patch.

The color for the window has been adjusted. The first and second string view are at the same
height. They have been split so that the color of the second can be lightened.

The code that changes the string views in the window are contained in the seqInst function. This
means that each time the function is run (the event is played), the GUI is updated with the new
values. All of the values except dur are contained and defined within the function seqInst. But
dur comes from the Pbind environment. It is passed to the seqInst as an argument. This argument
is not used in any part of the seqInst function. It is passed solely so that it can be included in the
GUI strings.

Floating point values have been rounded for more consistent display.

The harmonic and pitch array are printed as a single unit. The entire array is converted to a string
with the .asString message. You may notice that arrays larger than 10 have an odd character in
the middle; a small square box. This comes from the internal formatting SC uses when printing
an array using the postln message. If you send a postln message to an array containing more than
10 items SC will print 10 values, then a carriage return, then the remaining values. The square
box is the carriage return character. But the string view label doesn't recognize it and prints a
square box. I don't know how to correct this. One solution might be to print them one at a time
using a .do. But I've concluded it is a small, tolerable glitch.

The messages .midips and .midipcs convert midi numbers to pitch class strings. They do not
come with the SC package, and if you run the example below you will get a message saying
.midips is not understood. You can remove it without affecting the patch.

They are small instance methods I've created and included in my own SimpleNumber.sc file. I've
included the methods below. To use them, insert them into the SimpleNumber.sc file and recompile. You
also have to insert instance methods in the SequenceableCollection.sc file. See below for more
information on modifying existing default SC files.
13.10 GUI monitor; seqInst

var seqInst;

(
w = GUIWindow.new("Parameter Monitor",
Rect.newBy(20, 70, 550, 100)).backColor_(rgb(72,112,186));
StringView.new( w, Rect.newBy(5, 0, 150, 15), "Sequencer Patch"); // w.at(0);
StringView.new( w, Rect.newBy(170, 0, 300, 15),
"Press Command-period to stop playback.").labelColor_(rgb(255, 200, 200));
StringView.new( w, Rect.newBy(5, 20, 500, 15), "--"); //w.at(2);
StringView.new( w, Rect.newBy(5, 40, 500, 15), "--"); //w.at(3);
StringView.new( w, Rect.newBy(5, 60, 500, 15), "--"); //w.at(4);
);

seqInst = {arg dur; //dur comes from the Pbind environment


var pitchArray, trigger, midiPitch = 34, increment = #[2, 5], out;
var interval, harmArray, maxDelay, lDelay, rDelay;
harmArray = Array.fill(rrand(6, 12), {rrand(2, 12)}); //fill harmonic array
pitchArray = Array.fill(rrand(6, 12), //fill pitch array
{midiPitch = midiPitch + increment.choose; midiPitch%120}).scramble;
trigger = rrand(6.0, 10.0).round(0.1); //select a trigger time
maxDelay = 1.0; //set maximum delay
interval = [2, 5, 6, 7, 9, 12].choose; //choose interval for right channel
lDelay = rrand(0.1, maxDelay).round(0.01);

rDelay = rrand(0.1, maxDelay).round(0.01);
//Print a series of concatenated strings to string view at w.at(2)
w.at(2).label_("Trigger: " ++ trigger.asString ++
" Interval: " ++ interval.asString ++
" Left Delay: " ++ lDelay.asString ++
" Right Delay: " ++ rDelay.asString ++
" Next: " ++ dur.round(0.1).asString);
//Print harmonic and pitch array to string views 3 and 4
w.at(3).label_("Harmonic Array: " ++ harmArray.asString);
w.at(4).label_("Pitch Array: " ++ pitchArray.midips.asString);

out = Blip.ar((Sequencer.kr([`pitchArray, `(pitchArray+interval)],


Impulse.kr([trigger, trigger]))).midicps,
Sequencer.kr(`harmArray, Impulse.kr(trigger)), mul: 0.5);
2.do({ out = AllpassN.ar(out, maxDelay, [lDelay, rDelay], 1,
EnvGen.kr(Env.linen(6, 6, 1, 0.4)), //reverb envelope
out*EnvGen.kr(Env.linen(0, 3, 8, 0.9))) }); //dry signal envelope
out*EnvGen.kr(Env.perc(0, 14, 0.5))
};

Pbind(
\ugenFunc, seqInst,
\dur, Pfunc({rrand(5.0, 8.0)})
).play;

w.close;

13.11 midipcs and midips

//Include the lines below in SimpleNumber.sc

midipcs { arg acc = 0; //returns pitch string, given midi number
//acc forces accidental, -1 is flat, 0 is default, 1 is sharp
^#[["C", "Db", "D", "Eb", "Fb", "F",
"Gb", "G", "Ab", "A", "Bb", "B"],
["C", "C#", "D", "Eb", "E", "F", //default
"F#", "G", "Ab", "A", "Bb", "B"],
["B#", "C#", "D", "D#", "E", "E#",
"F#", "G", "G#", "A", "A#", "B"]
].clipAt(acc + 1).wrapAt(this);
}
midips {arg acc = 0;
^this.midipcs(acc) ++ (div(this, 12) - 1)
}

//Include these lines in SequenceableCollection.sc

midips {arg acc = 0; ^this.performBinaryOp( 'midips', acc ) }


midipcs {arg acc = 0; ^this.performBinaryOp( 'midipcs', acc ) }

[This ends the first section; Future sections: wave tables, granular synthesis, LFO controls,
balanced modulator, external controls (mouse, wacom pad),]

Section II Computer Assisted Composition

14. Numbers, Operators, Music Functions

Operators, Precedence

14 Assignments

a) Write these lines in receiver notation:

max(45, 50)
midicps(33)
rand2(20)

b) Write these lines in functional notation:

20.rand
13.max(5)
25.67.round(0.5)

Write a line of code that:

c) chooses a random midi number between 30 and 60, then posts the cycles per second for
that midi number.
d) returns the lesser (min) of 23 and a random number between 0 and 100

Once you have written lines of code you can use SC to evaluate the expression. "Evaluation" can
result in a message from the system about the code, or numbers resulting from the expression
(like a calculator), or ultimately a sound from a synthesizer patch or melodic sequence. This is
our goal.

There are two ways to evaluate a line; hitting the "enter" key just evaluates or runs the code,
command-p evaluates the code and prints the result to the screen. To evaluate, you select the
lines, or if it is only a single line just place the cursor on that line, then hit "enter" or com-p. Try
it with the lines below (do them one at a time). Try both com-p and enter.
14.1 Evaluation

"My string"

You will notice that if you hit enter nothing seems to happen. While SC did indeed evaluate the
code there are no instructions requiring action, so it did nothing. It just looked at the lines and
pondered them. Using com-p will evaluate and print, but even this result is not very interesting.

In the second line we asked it to evaluate a "5" and it said, yes, it is a "5." It's more interesting if
you have some operators. In Ex. 14.2 I add the operators "+", "/", "*", and "-" to the expressions.
Try each line separately (type them in, select the line or position the cursor in the line, then hit
com-p). These simple lines of code perform the same operations as a basic calculator. SC can do
much more, but this is a beginning to understanding what code is and how SC runs the code.
14.2 Operators (+, /, -, *)

1 + 4

5/4

8*9-5

9-5*8

9-(5*8)

The last three examples above require a discussion of precedence. In the third and fourth expressions, is it "8*9"
then "-5," or "9-5" then multiply that by 8? 8*9, then -5, returns 67, but 9-5, then *8, yields 32.
The results differ because the expressions differ in precedence. Precedence is the order in which
each operator and value is realized in the expression. The first expression is 8*9, then -5, while
the second is 9-5, then *8. The precedence for SC is quite simple; enclosures first, then left to
right. Since the 8*9 comes first in the first expression it is calculated before the -5. In the line
below it, 9-5 is calculated first then that result is multiplied by 8. The last line demonstrates how
parentheses (enclosures) will force precedence (i.e. the compiler does the expression in
parentheses first); 5*8, then that result is subtracted from 9.

Can you predict the result of each line before you evaluate the code?
14.3 More operators

1 + 2 / 4 * 6

2 / 4 + 2 * 6

(2 * 6) - 5

2 * (6 - 5)

Try these other binary operators: > greater than, < less than, == equals, % modulo.
14.4 Binary operators (>, <, ==, %)

10 > 5

5 < 1

12 == (6*2)

106%30

The > (greater than) and < (less than) symbols return an interesting result: "true" and "false." SC
(and most languages) understands the value of each of the numbers and understands that 10 is
greater than 5 (therefore "true") and 5 is not less than 1 ("false"). We will use this logic in later
chapters.

Modulo is a very useful operator that returns the remainder of the first number after dividing by
the second. For example, 43%10 will reduce 43 by increments of 10 until it is less than 10, then
return what is left. The result is 3. 12946%10 is 6. (This is much easier to understand than
explain.)

Can you predict the results of these expressions?


14.5 Predict

(8+27)%6

((22 + 61) * 10 )%5

All of these examples use integers. Integers are whole numbers (if you don't remember math, that's
numbers without a decimal point: 1, 2, 3, 4, etc.). Numbers that use a decimal are called
floating-point values. In SC you express integers by just writing the number (7, 142, 3452).
Floating point values must have the decimal with numbers on both sides: 5.142, 0.5 (not .5).

Messages, Arguments, Receivers

The way you get things done in SC (as in Smalltalk) is to combine messages with arguments. The message
usually has a meaningful name like "sum" and is followed by parentheses that enclose arguments
separated by commas. Such a message might take four arguments, and would be written "sum(1,
3, 4, 7)." "sum" is the name of the message (or function) and 1, 3, 4, 7 are the arguments. The
single name "sum" followed by the two parentheses represents a collection of coded expressions
buried deep in the SC program that come together to perform the actions and return the result
you need, such as summing a group of numbers.

Here are some typical messages used in computer music (in functional notation) followed by a
comment (//) which describes the returned value. The computer ignores anything after the "//" of
each line. This allows you to describe the line of code in more detail.
14.6 Music related messages (cos, abs, sqrt, midicps, cpsmidi, midiratio,
rand, rand2, rrand)

cos(34) //returns cosine

abs(-12) //returns absolute value

sqrt(3) //square root

midicps(56) //given a midi number, this returns


//the cycles per second in an equal tempered scale

cpsmidi(345) //given cps, returns midi

midiratio(7) //given a midi interval, returns ratio

ratiomidi(1.25) //given a ratio, returns midi number

rand(30) //returns a random value between 0 and 29

rand2(20) //returns a random value between -20 and 20

rrand(20, 100) //returns a random value between 20 and 100

There are a few special terms in computer languages. Among them are nil, true, and false. We
use these terms often when we test a value and have the computer do one operation if the
outcome is true, and another if it is false. Try running this line of code 10 or 20 times and see
how often it returns "true," how often "false."
14.7 Coin

coin(0.7) //returns a true 70% of the time, false 30%

The examples above are all in functional notation (I'm more used to them since I have worked in
C so much). When you put the first argument before the function name separated by a period it is
called receiver notation, and we use the terms message (for "cos") and object (for 30). It is the
same as the examples above, but in a slightly different syntax. (The period is called a dot, so you
would say "thirty dot cosine" and "zero point seven dot coin.")
14.8 Receiver notation (cos, coin, rand)

30.cos //same as cos(30)

0.7.coin //same as coin(0.7)

20.rand //same as rand(20)

7.midiratio

Binary functions have two arguments.


14.9 Binary functions (min, max, round)

min(6, 5) //returns the minimum of two values

max(10, 100) //returns maximum

round(23.162, 0.1) //rounds first argument using second argument

The two arguments are separated by a comma. Each argument can be an expression similar to the
ones we started out with. The "min" example below returns the smaller value of the expression
5*6, and 35. (Note: these examples don't make a whole lot of sense out of context. Why would
you ask a computer to return the lesser value when you can easily do the math on your own? It
will become clear as we do more examples with variables.)
14.10 min and max

min(5*6, 35)

max(34 - 10, 4) //returns the maximum of two values

Binary functions can also be written in receiver notation. But you can't put both arguments in
front of the function. Instead, you place the first argument before the function call with the
second argument in the parentheses. The object in the line below is 6, the message is min, and 5
is an argument.
14.11 Receiver notation

6.min(5) //same as min(6, 5)

How do you decide when to use functional notation (max(10, 5)) or receiver notation
(10.max(5))? A lot of it is based on personal style and context. They are both valid.

One more point and then you can do the assignments. You have seen that I can put an entire
expression inside a function for an argument (e.g. max(10-5*32, 154)). The series of numbers
"10-5*32" is an entire expression, understood independently by SC. This is called nesting.

It is also possible to "nest" functions and messages. That means I can put one function inside
another as an argument for that function. Instead of two numbers as arguments inside the max
function (max(10, 5)), I can nest another function inside the parentheses. In ex. 3.12 SC first
calculates the maximum of 10 and 4, then the minimum of 2 and 5. The results, or the return of
those two functions are used as arguments in the outer max(). The second example chooses a
random number between 0 and 100, then another random number between 0 and 13 (using
receiver notation), then finds the maximum of those two values, then calculates the cycles per
second using that number as a MIDI pitch.
14.12 nesting (max, min, midicps, rand)

max(max(10, 4), min(2, 5))

midicps(max(13.rand, 100.rand))

So far we have only evaluated single lines of code. Most patches and programs are hundreds of
lines of expressions. The syntax for separating several lines of code is to end the line with a
semicolon. When you run a group of expressions the cpu evaluates each one in sequence. To run
a section of code, select all of the lines and press enter.

Try taking one of the semicolons off and see what happens. You will get an error message in a
new window. The error message tries to be as helpful as possible, perhaps pointing to the place
where it stopped understanding.
14.13 Several lines of code (midicps, postln, max)

"This pitch is C5".postln;


72.midicps.postln;
"This is the lesser of 32 and 5".postln;
max(32, 5).postln;

As you have learned from the Digital Synthesis section, you can create strings of messages and
send them to one object. In ex. 14.14 I have sent the postln message to the midicps, which is a
message to the object 53. In other words, convert 53 to cps and post with a line the results.

14.14 message strings (midicps, post, min)

53.midicps.postln; //return the cps of the midi value 53
//and post the result with a return

45.min(5).post; //calculate the minimum of 45
//and 5 and post the result

You should now be able to do the assignments.

15. User Defined Functions with Arguments, Expressions, Variables

15 Assignment

a) Write a function that calculates and returns the ratio of two MIDI pitches. For example,
myIntFunc(65, 72) would first calculate the interval (72-65 =7) and return the ratio for a
fifth.

b) Write a function that chooses a random frequency between two given frequencies, then
prints that frequency and the first five harmonics as defaults, but will take 5 arguments for
any harmonic. Call the function using defaults and arguments. For example
myHarmFunc(200, 600) would return a random frequency between 200 and 600, and the
second, third, fourth, and fifth harmonics. myHarmFunc(200, 600, 5, 10, 11, 16) would
return a random frequency between 200 and 600 and the fifth, tenth, eleventh and sixteenth
harmonics.

Variables are names for memory locations in the computer. The names of the variables are
determined by the programmer. They are called variables because they change as the program
runs. I think of them as containers like mail slots where you can place and retrieve numbers,
arrays, strings, characters, entire functions, etc. There are lots of types and sizes of variables. In
other languages you had to keep close track of what they were and how big they were. In the SC
language it's a lot easier. SC figures out how big they are and what kind they are from context.
If you haven't programmed before, believe me, this is a major plus. All you have to do is declare
them at the beginning of the program. Declaration uses the syntax "var" followed by the name
you want to use, a comma, followed by additional names of other variables followed by commas.
A semicolon is used at the end of that expression, like this: "var myVar1, myVar2, anotherVar;"
You can name them anything you want, with a few rules; you can't start with a number, and they
have to be one contiguous word. You also need to start them with lower case letters. Many
programmers run several words together and use a capital letter at the beginning of each new
word: nextPitchChoice, intervalCount, originalSeries, etc.

Giving a variable a value (or storing information in memory at that location) is called
assignment. The syntax is "variable = value;". The whole point to a variable is that they can
change while the program runs. For this example I'll use simple variable names: "a", "b", and
"c."

A handy SC convention provides that single letter variables such as "a" and "b" can be used
without being declared. So the "var a, b, c;" is a bit redundant. I include it here for clarity. Single
letter variables are handy when trying quick examples of code, but they should not be used for
actual projects. Use meaningful variable names such as intervalCount.

Up until now we have run only a single line of code. The next example consists of several lines.
SC reads and evaluates them in descending order. In the case of the second to last line, it
evaluates the parentheses first, then the rest of the line. Select the entire example and hit return.
(I've enclosed the entire example in parentheses so that you can select all the lines quickly by

double clicking on either of the outer most parentheses.) In this example we don't have to use
command-p because we use postln to print the results.
15.1 Variables

(
var a, b, c;

a = 20; //Store the numbers 20 and 50 in locations "a" and "b"


b = 50;
c = a * b; //multiply a and b and store the result in the variable c
c = c + 20; //add 20 to c and replace what was c with the new result
c.postln; //post with a line return the results so far
a = 5; //put a new value in the variables a and b
b = 6;
c = c + (a * b); //add c to a * b, store result in c
c.postln; //post c with a new line
)

A more concise method is to assign values to variables at the same time as their declaration.
15.2 Variable declaration

(
var a = 50, b = 100, c = 3; //declare variables and assign values

a = (a * b + (c * a))%55; //assign var "a" the result of this expression


a.postln; //post or print the results, now stored in "a"
)

There are many functions included in any programming language. They have been written by
programmers (in this case the author of SC or other musicians working with SC). In the previous
chapter we used max(), min(), midicps, rrand, etc. It is also possible to build your own functions.
The exercises below are a bit academic. You don't often use the .value message in this way. But
by writing your own functions you will better understand how the functions in SC work.

A function is a series of expressions enclosed in two braces; {lines of code}. The entire function
is usually (but not always) assigned to a variable. The lines of code are executed in order and the
results of the last line of code is "returned." When the function is called or run anywhere in the
program it is the same as if all the lines in the function (inside the braces) were inserted in place
of the function. For example, if you wrote a function called "chooseMidi" which returned a
random value between 60 and 72 then each place that function is used a midi value between 60
and 72 will be returned by the function and used in that line of code. A function is evaluated by
using the .value message. Given a function named chooseMidi, a line of code such as
"max(chooseMidi.value, chooseMidi.value)" would first run the function chooseMidi twice, then
use those values in the max() function.
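
Here is a minimal sketch of that chooseMidi idea (the body of the function is my own guess at what such a function might contain):

(
var chooseMidi;
chooseMidi = {rrand(60, 72)};
max(chooseMidi.value, chooseMidi.value).postln;
)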

A simple function with no variables and arguments, with its call:


15.3 Function

(
var myFunc;

myFunc = {100 * 20};

myFunc.value.postln;
)

The first line declares the variable name that will be used for the function. The second line is the
function assignment ("make myFunc equal to the line {100 * 20}", or store the line {100 * 20} in
myFunc). The function is run in the third line. Every place you put myFunc in your code the
value 2000 will be used. You might wonder why we are not just using the value 2000. There are
two answers. The first is that the function would normally be more complex than this, returning
different results or streams of results each time. Even so, variables that are assigned a single
static value or functions that always return the same value are useful as place holders. Suppose,
for example, you use the value 10 for the length of a sound. In your patch you may use it 20
times. If you just type 10 in each of those positions in code and you later decide you want to
change it to 20 then you will have to change them all using a replace function or by hand
individually. It is more efficient to declare a variable or function "length" and assign 10 to that
variable or function. Then use "length" instead of 10 in each of those lines of code. When you
want to change it, you change it once only, at the point of assignment.
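
A minimal sketch of the "length" idea (the surrounding values are invented for illustration):

(
var length;
length = 10; //change this one assignment to update every use below
("first duration: " ++ length).postln;
("second duration: " ++ (length * 2)).postln;
)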

Remember that arguments are values passed to a function. They are a list of values between two
enclosures separated by commas. There are two arguments in this line of code; "max(10, 20)".
Ten is the first argument and twenty is the second argument. You can use variables inside the
function, and you can also pass arguments to the function from the outside. (It is easy to confuse
variables and arguments because they are used in code the same way. Variables are declared and
used inside the function. Arguments are passed to the function from the outside. I guess you
could say that arguments are variables that come from the outside.) When you write your own
functions you declare arguments right after the opening brace. Next you declare variables if you
want to use them.

Here is a function with arguments but no variables. Notice that the postln message is included as
part of the function. "func.value" is where the function is run and at that point the arguments 15
and 5 are passed to the function. They are stored in the arguments named "a" and "b" and used in
the function:
15.4 Function with arguments

(
var func;
func = { arg a, b;
b = (b * 20)%a;
b.postln;
};
func.value(15, 5);
)

The same example with a variable.


15.5 Function with arguments and variables

(
var func;
func = { arg a, b; var c;
c = (b * 20)%a;
c.postln;
};
func.value(15, 5);
)

How do you decide whether or not to use arguments and variables? What do you name them?
You can name the variables and arguments anything you want. Whether to use a variable or
argument will depend on context. The distinction should become clearer as you develop patches
and sections of code.

Why would you want to write your own function? All of the examples above would work
without being part of a function. They could be included in your patch just as the lines of code
without the enclosures, the function name, or the arguments. Functions come about for two
reasons; when you use a section of code over and over, either in a single patch or generally all
your patches, and simply for clarity and organization. Say for example that you write a section of
code in a single patch that generates a twelve-tone matrix. This section of code might be useful
several places in the patch. Rather than repeat the code each place it would be clearer and more
efficient to write a function (e.g. "matrixGenerator") and use that single function each time you
need a matrix in code. The other reason, organization, is a matter of choice. A section of code
can be developed in a separate file as a function, then inserted into a patch when it is working
correctly10.

When you write the function you can enter default values for the arguments. When this is done
you can then call the function and give it any of the arguments, or omit any of the arguments. If
you omit the arguments then the defaults are used. The example below shows how to use default
arguments.

Can you predict the values of the three "myFunc" calls before running this code? Line 7 has no
arguments passed to the function and will use the defaults (10 and 2). Line 8 will use 15 as the
first argument and 2, which is the default, as the second argument. Line 9 will use 11 and 30 for
both arguments and neither of the defaults.
15.6 Function calls

(//line 1
var myFunc;
myFunc = { arg a = 10, b = 2;
b = (b * 100)%a;
b.postln;
};
myFunc.value; //line 7
myFunc.value(15); //line 8
myFunc.value(11, 30); //line 9

)

10 See the section below describing how to write your own classes. You can create classes from your functions that will compile when SC starts up. They can then be used just as you would use SinOsc or max, min, and rrand.

When you add arguments in the function call you have to make sure you have them in the correct
order. In the example above the "a" argument has to be the first value passed and the "b" argument
has to be the second. Had we written myFunc.value(30, 11) then 30 would be used as "a" and 11 as
"b."

Another way to enter arguments is to use keywords. With keyword arguments you can put them in any
order, but each has to be preceded by the keyword and a colon. In this example I've given the
arguments more meaningful names: "firstValue" and "secondValue." Try to predict the outcome
of each line before you run this example.
15.7 Keywords

(
var myFunc;
myFunc = { arg firstValue = 10, secondValue = 2;
firstValue = (firstValue * 100)%secondValue;
firstValue.postln;
};
myFunc.value;
myFunc.value(firstValue: 15);
myFunc.value(firstValue: 30, secondValue: 11);
myFunc.value(secondValue: 30, firstValue: 11);
myFunc.value(secondValue: 23);
)

At this point the function still doesn't make a lot of sense because it is only printing these values
to the screen. How do you use a function in a meaningful way within a patch? How can the
function pass the values to other lines of code? The answer lies in the "return" value, or what the
function returns when it is finished running. The rule for returns is simple in SC: the value of the
last line of code in the function is returned, and that value is used by the
rest of the program. Example 15.8 illustrates a function that returns a value to another section of
code.
15.8 Return

(
var myFunc;
myFunc = { arg firstValue = 10, secondValue = 2;
firstValue = (firstValue * 100)%secondValue;
firstValue; //this value is "returned"
};
(10 + myFunc.value).postln; //the "return" is used here
)

You are now ready to do the assignments. You will use the functions shown below. You might
want to try the functions a few times in a single line of code before inserting them into a
function.

The function "midiratio" returns the ratio of the given midi interval. For example midiratio(7) or
7.midiratio will return the ratio for the interval of an equal tempered fifth (7 half-steps).

The function rrand returns a random number within the range of the two arguments. For example
rrand(20, 100) or 20.rrand(100) will return a value between 20 and 100.
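
As a quick sketch, both messages can be tried in a single line before building the assignment functions (the interval range here is arbitrary):

7.midiratio.postln; //ratio of an equal tempered fifth, about 1.498
rrand(1, 12).midiratio.postln; //ratio of a random interval between 1 and 12 half-steps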

16. Iteration Using do(), Comments, "post here always"

16 Assignment:

a) Write a do() function that begins with a given frequency, then chooses from a list of ratios
(3/2, 5/4, 4/3, etc. or 1.5, 1.25, 1.33, etc). The frequency is then multiplied by that fraction or
ratio generating a new value for the frequency. Store that value in the variable and print it
out.

The results should be something like this:

1200 * 0.666667 = 800


800 * 0.8 = 640
640 * 1.5 = 960
960 * 1.5 = 1440
1440 * 0.8 = 1152

b) Try writing a nested "do" that chooses a fundamental frequency and a duration (in
seconds between 1.0 and 3.0). Inside that "do" nest another do that generates three ratios.
Those ratios are then used with the fundamental frequency to create a chord. (So in musical
terms you are creating a “pedal,” then three chords above the pedal) Here is a template:

20.do({
var freq, dur;
your code for choosing durations and frequencies
print out those choices
3.do({
var first, second, third;
your code for choosing chord members
})
})

The printout should look like this:

Fundamental and duration = 650 1.38657


Ratios = [ 0.75, 1.25, 0.75 ]
Chord members = [ 487.5, 812.5, 487.5 ]
Ratios = [ 0.75, 0.666667, 0.666667 ]
Chord members = [ 487.5, 433.333, 433.333 ]
Ratios = [ 1.25, 0.8, 0.8 ]
Chord members = [ 812.5, 520, 520 ]

Fundamental and duration = 964 3.70229


Ratios = [ 1.25, 0.75, 0.75 ]
Chord members = [ 1205, 723, 723 ]

Ratios = [ 0.8, 0.666667, 1.33333 ]
Chord members = [ 771.2, 642.667, 1285.33 ]
Ratios = [ 1.5, 0.8, 0.8 ]
Chord members = [ 1446, 771.2, 771.2 ]

Fundamental and duration = 284 4.20594


Ratios = [ 1.25, 0.666667, 1.5 ]
Chord members = [ 355, 189.333, 426 ]
Ratios = [ 0.666667, 1.25, 0.666667 ]
Chord members = [ 189.333, 355, 189.333 ]
Ratios = [ 0.75, 0.8, 0.666667 ]
Chord members = [ 213, 227.2, 189.333 ]

Remember that functions take arguments. For example, the rrand function requires two
arguments; low and high ("rrand(2, 100)"). Up until now we have used numbers as arguments to
functions. But some functions require another function as an argument. In these cases the
argument list might look like this: exampleFunc(argOne, argTwo, {entireFuncAsThree}).

Remember a function is a set of operations that are enclosed in braces:


16.1 Function

{
var a, b, c;
a = 20;
b = 10;
c = a*b;
c.post;
}

In practice the function that is used as an argument can be assigned to a variable and passed as an
argument by way of the variable. Or the function can be nested inside the argument list. Here are
both examples.
16.2 function passed as variable

// function passed as a variable

var myFunc;

myFunc = {
lines of code
};

otherFunc(1, 45, myFunc);

// function nested

otherFunc(1, 45, {lines of code})

The "do" function or message is used to repeat a process a certain number of times.

prototype:
16.3 do prototype

do(object, {function performed})

Remember that numbers can be objects. So the do() can use a number or an array as its first
argument and another function as the second argument. The first argument, if a number,
represents the number of times the do will repeat. The function is what is repeated. See below for
an explanation of do() using an array as the first argument or object.
16.4 do example

do(5, {"boing".postln;})

In this example the number 5 is being "done" or iterated over. In simple terms it means the
function will repeat 5 times. The function contains just one line instructing the computer to post
the string "boing."

Be sure the second argument is a function by enclosing it in braces. Try example 16.4 without
the braces. Notice the difference.

As I said earlier, comments are a way for a programmer to make notes about what is going on.
The program ignores the comments. You make a comment by using two slashes before the lines
you want the computer to ignore: "//" Everything after the double slashes is ignored. The
program also ignores white space (for the most part). White space is non-printing characters such
as returns, spaces, tabs, etc. This means you can spread things out over several lines. For
example, this line:
abs(57 - 40).midiratio.postln;

can be spread out to:


abs(
57
-
40
)
.
midiratio
.
postln;

I can then add comments to the ends of each line:


abs( //Take the absolute value of the following
57 //the first midi pitch
- //minus
40 //the second midi pitch
) //close the "abs" function; note that I put the close parenth
//at the same vertical plane as the "abs", for clarity
. //the object returned by "abs" is sent the message
midiratio //"midiratio"

. //which in turn is sent the message
postln; //"postln;"

Here is one of my worst habits in coding: not writing enough comments. I dig up code I wrote
last year and can't remember anything that is going on. It's a real waste of time. You should
comment profusely.

Commented version of the "do" example above:


16.5 do with comments

do( //The function "do"


5, //the first argument is an object. In this case we are using the
//number 5 as an object. The "do" message knows that the 5 means
//iterate the function 5 times
{ //the opening brace defining the function
"boing".postln; //This function has only one line of code
//"Post the string boing"
} //close the function with a brace
) //close the "do" argument list with a parenthesis

"5" is the object to do, which in this case means repeat 5 times. The function that is being
performed five times is just a single line; "boing".postln.

To me it makes more sense using receiver notation where the first argument, the 5, is the object
or receiver and do is the message. Ex. 16.6 is equivalent to Ex. 16.5.
16.6 do in receiver

5.do({"boing".postln;})

By now you may be a little frustrated with the post messages because they are printed to the
bottom of the page of code you are working on. A solution to this is "post here always." This
tells the program to post to the current window (the window that is open when you choose "post
here always") all the time. To do this, open a new window, select "post here always" from the
language menu. The results of all these functions will post to the open window, leaving your
code page clean. Try it with the examples above.

For the assignments in this chapter you will want to use the .choose message. An array of values
understands the .choose message, such that "ratio = [3/2, 4/3, 5/4, 2/3, 3/4, 4/5].choose;" will
store one of those ratios in the variable "ratio."
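
A minimal sketch of .choose inside a do (this only posts the picks; it does not yet multiply a frequency):

10.do({[3/2, 4/3, 5/4, 2/3, 3/4, 4/5].choose.postln;})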

17. Control Using if(), and do() continued, Arrays

17 Assignment
a) Write an example of do() with a function that uses an if(). The function chooses a random
midi number between C4 (60) and C5 (72). If the new choice is different from the previous
choice it is printed. If it is the same as the previous choice, "repeat" is printed. For example,
if the previous choice was 60 and the current choice is 60, the program prints "repeat", but if
the current choice is anything but 60, it prints that number. Here is a template:

var currentMidi = 60, previousMidi = 60;

100.do(
{
code to choose a new value and compare with previous value
}
)

Control message "if"

So far we've learned about declaring variables, functions and arguments, assigning values to
those variables and we've written some functions. But the logic in computer programming
(telling the machine what to do) lies in control statements nested in do statements. There are
several similar methods (such as while(), for(), forBy()), but do() and if() are the most common.
"Do" puts the computer in motion, telling it to perform a task a set number of times, but "if" tells
it what to do. It is the foundation of artificial intelligence.

The "if" message or function uses an expression to be evaluated which returns a true or false
(remember the 10 < 20 examples which returned "true" or "false"? here is where they are used)
followed by two functions. When the expression evaluation is "true" then the first function is
performed. If false, the second function is performed.

Here is a prototype:

if(expression, {true function}, {false function})

The expression which is evaluated must be "Boolean", which means either true or false. The true
or false often results from a comparison of two values separated by an operator such as "<" for
less than, ">" for greater than, "==" for equals, etc. (Note the difference between "==" and "=."
"=" means store this number in the variable, "==" means "is it equal to?"). Run both examples of
code below. The first one evaluates a statement which returns "true" (because 1 does indeed
equal 1) so it runs the first function ("true . . "). The second is false, so the "false" function is run.
17.1 if examples

if(1 == 1, {"true statement".postln;},{"false statement".postln;})

if(1 == 4, {"true statement".postln;},{"false statement".postln;})

Commented:
17.2 if commented

if( //Begin the if statement


1 == 1, //expression to be evaluated; "1 is equal to 1" true or false?
{"true statement".postln;}, //if the statement is true run this code
{"false statement".postln;} //if it is false run this code
)

Here are other Boolean operators


< less than
> greater than
<= less than or equal to
>= greater than or equal to
!= not equal to
== equal to

"or" combines two statements, returning true if either are correct; or(a > 20, b > 100)

"and" combines two statements, returning true only if both are correct; and(a > 20, b > 100).

These examples use numbers. They don't seem to make sense out of context (in other words,
why would I ever use the expression 10 > 0? and why would you just post "true statement" or
"false statement"), but in real code you use variables that are constantly changing while the
program runs. For example, suppose you are using the variable midiPitch that is being
determined based on a previous midi pitch and a random interval choice. You may
want to check it each time to make sure it doesn't exceed a particular value (e.g. 60). In that case
you would insert the line below to check, and change if necessary, midiPitch.

if(midiPitch > 60, {change it}, {don't change it})
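
As a concrete sketch of that line (transposing down an octave is my own choice of "change it"):

(
var midiPitch;
midiPitch = rrand(50, 70);
if(midiPitch > 60, {midiPitch = midiPitch - 12});
midiPitch.postln;
)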

It can be written in receiver notation;

expression.if({function},{function})
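
For example, the first line of 17.1 could be written in receiver notation like this:

(1 == 1).if({"true statement".postln;}, {"false statement".postln;})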

But I prefer the function notation "if(true . . ." because it looks the way it is said "if this is true do
this, if not do this."

Here are some simple examples.


17.3 if examples

if(10 > 0, {"true statement".postln;},{"false statement".postln;})

if(10 < 0, {"true statement".postln;},{"false statement".postln;})

if(20 == 30, {"true statement".postln;},{"false statement".postln;})

if(or(10 > 0, 10 < 0),


{"true statement".postln;},{"false statement".postln;})

if(and(10 > 0, 10 < 0),
{"true statement".postln;},{"false statement".postln;})

if(or(10 > 0, or(5 < 0, 100 < 200)),


{"true statement".postln;},{"false statement".postln;})

You occasionally use a lone, single if statement like the ones above. But more often "if"
statements are used during loop iterations such as "do" to test the results of that iteration. They
are the bread and butter of programming.

The do function or message is used for iteration (a line of code that repeats). The first argument
is the item that is being "done" or repeated over, the second argument is the function that is being
repeated. There are a number of objects that can be "done," or that understand the do message.
But most often the object or argument is a number.

The prototype is "do(object, {function})" or in receiver notation "object.do({function})". Here


are two simple examples in both functional and receiver notation.
17.4 10.do

do(10, {rrand(60, 72).postln})

10.do({rrand(60, 72).postln})

do(100, {max(30, 100.rand).postln})

100.do({max(30, 100.rand).postln})

Two values generated by the do can be passed to the nested function. These two values are the
items being "done" and a counter that keeps track of each iteration. The two values are passed by
declaring them as arguments at the beginning of the nested function. You can name them
anything you want.
17.5 do(10) with arguments

do(10, {arg eachItem, counter; eachItem.postln; counter.postln})

You are going to scratch your head now, but stick with me. In the case of the 10.do, the first and
second argument are the same, because the 10.do is iterating over the object "10", so it "does"
the numbers 0 through 9 in turn (computers always begin counting with 0, not 1). And the
counter is moving from 0 through 9 as it counts each iteration. They are the same, so why have
both? Using an array as an object makes the distinction clearer. The do iterates over each item in
the array.
17.6 do([array] with arguments

[10, "hi", 12.56, [10, 6]].do( {arg each, count; each.postln; count.postln})

Yes, that is an array within an array. [10, 6] is an array, and is the last item in the array [10, "hi",
etc.]. In this example the do "iterates" over the array passing each item in the array to the nested
function as "each."

Now to combine the do() with the if(). The following program iterates through 10 items (the
numbers 0 through 9) and tests each one to see if it is less than 5. If it is, it prints "< 5 boing" if
not, a "not < 5 boing." Why does it seem to change to "not" boing on the sixth iteration rather
than the fifth? Shouldn't it change on the fifth, since 5 < 5 is a false statement? (Remember that
computers count beginning with 0, not 1. This is a common error.)
17.7 10 boings

(
10.do(
{arg item;
if(item < 5, //Boolean test
{"< 5 boing (true)".postln;}, //True function
{"not < 5 boing (false)".postln;} //False function
)
}
)
)

Same example, but using an array as the object.


17.8 array of boings

(
[1, 8, 3, 87, 2, 4, 100].do(
{arg item;
if(item < 5, //Boolean test
{item.post; " is < 5 (true)".postln;}, //True function
{item.post; " is not < 5 (false)".postln;} //False function
)
}
)
)

Below shows iteration over an array of pitch classes.


17.9 pitch class do

(
["C", "C#", "D", "Eb", "E", "F"].do(
{arg item, count;
if((item == "C#").or(item == "Eb"), //Boolean test
{item.post; " is a chromatic pitch.".postln;}, //True function
{item.post; " is a diatonic pitch".postln;} //False function
)
}
)
)

You might say we have taught the computer what chromatic pitches are. This is where AI begins.

One nice trick with an iteration and an "if" statement is to control when to "postln", which
includes a new line rather than a "post", which does not print a new line:
17.10 new line

(
100.do(
{
arg index, count; //name the two arguments. We won't use
//"index", but we have to name it to get to "count."
if(count%20 == 19, //Every 20th time the true statement will
//be performed because 19%20 = 19, 39%20 = 19,
//59%20 = 19, etc.; every other time the false
//statement is performed.
{" new line: ".postln;}, //print a carriage return
{" * ".post;} //just " * " without a return
)
}
)
)

The function can also contain variables that remain active throughout the iteration. Here is a
function that starts with a midi pitch of 36 (C2), then chooses a midi interval and adds that to the
current midi pitch. The next time the function runs another midi interval is chosen and is added
to that pitch, and so on, creating an arpeggio of only the given intervals. There is an if statement
that insures the values stay within a four octave range.
17.11 interval arpeggio

(
var pitch = 36, nextInt = 0;
100.do(
{
nextInt = [4, 5, 7, 9].choose;
nextInt.postln;
pitch = pitch + nextInt;
if(pitch > 84, {pitch = pitch%12 + 36});
pitch.postln;
}
)
)

Just For Fun

So far we've only talked about programming techniques. I'm sure you've been wondering how
this all relates to actual sounds. We will get to the synthesis portion of SC a few chapters down,
but I can't help jump ahead and show you how to plug this idea into an actual patch. I don't use
the .do, but I do use the function. The [4, 5, 7, 9] are the intervals being chosen (M3, P4, P5,
M6). Try changing them to see if the character of the arpeggios changes. Can you recognize, for
example, the difference between consonant choices (e.g. [5, 7, 12]) and dissonant intervals (e.g.
[1, 3, 6, 11])?
17.12 Just for fun, arpeggios (scope, Spawn, Pan2, SinOsc, midicps, EnvGen,
Env, perc, rand2)

var pitch = 48;

Synth.scope(

{
Spawn.ar(
{
pitch = pitch + [4, 5, 7, 9].choose;
if(pitch > 108, {pitch = pitch%12 + 48});
Pan2.ar(
SinOsc.ar(pitch.midicps, mul: 0.4)*
EnvGen.kr(Env.perc(0, 0.3)), 1.0.rand2)
}, 2, 0.125)
})

18. Collections, Arrays, Array Messages

18 Assignment

a) Write code that declares four variables. Assign each variable to an array. Each array
contains four midi values that define a chord (e.g. MM, Mm, mm, etc.). Then set up a do()
function that in each iteration transposes all four arrays to a randomly chosen chromatic
pitch within an octave. But all values must remain within the octave.
Example:

var c1, c2, c3, c4;


c1 = [0, 4, 7, 10];
c2 = [0, 3, 7, 10];
c3 = [0, 4, 7, 11];
c4 = [0, 3, 6, 9];

10.do({
code that chooses values between 0 and 11 for each array transposition and transposes that
array
})

The output should look something like:

Chord, transposition choice, transposed chord:


c1, 5, [5, 9, 0, 3];
c2, 2, [2, 5, 9, 0];
c3, 10, [10, 2, 5, 9];
c4, 1, [1, 4, 7, 10];
b) Write a function that transposes an array of random midi pitches by each item in the array
while remaining within one octave. You may want to try an array of all 12 pitches
representing a 12 tone row.

If the array is [0, 4, 2, 3, 6, 5, 8, 7, 1, 10, 9, 11] then the output would be:

[ 0, 4, 2, 3, 6, 5, 8, 7, 1, 10, 9, 11 ] //transposed by 0
[ 4, 8, 6, 7, 10, 9, 0, 11, 5, 2, 1, 3 ] //transposed by 4
[ 2, 6, 4, 5, 8, 7, 10, 9, 3, 0, 11, 1 ] //transposed by 2
[ 3, 7, 5, 6, 9, 8, 11, 10, 4, 1, 0, 2 ] //etc.
[ 6, 10, 8, 9, 0, 11, 2, 1, 7, 4, 3, 5 ]
[ 5, 9, 7, 8, 11, 10, 1, 0, 6, 3, 2, 4 ]
[etc.]

A collection or array is a group of items. Arrays are enclosed in brackets and each item is
separated by a comma. Here is an array of integers.

[1, 4, 6, 23, 45]

You can have arrays of strings. (A "string" is a group of characters that the computer sees as a
single object.)

["One", "Two", "Three", "Four"]

Or you can have a mixture. Note the "34" is not understood by SC as the integer 34, but as a string
consisting of the characters 3 and 4. But 1, 56, and 3 are integers.

[1, "one", "34", 56, 3]

Entire arrays can be assigned to a variable, and an array understands the postln message:
18.1 post array

(
var a;
a = [1, 24, "forty", 5.4, "last"];
a.postln;
)

You can also perform math on entire arrays. That is to say, the array understands math messages.
18.2 array math

(
a = [1, 2, 3, 4]; //declare an array
b = (a + 12)*10; //add 12 to every item in the array, then multiply them
//all by 10 and store the resulting array in b
b.postln;
)

Notice in this example I did not declare the variables "a" and "b." Lower case single letters are
understood by SC as variables and don't need to be declared.

Can you predict the outcome of each of these examples?


18.3 array.do and math

(
a = [60, 45, 68, 33, 90, 25, 10];
5.do(
{
a = a + 3;
a.postln;
}
)
)

18.4 array + each item

(
a = [60, 45, 68, 33, 90, 25, 10];

5.do(
{arg item;
a = a + item;
a.postln;
}
)
)

This is a little harder; predict the outcome of the example below before running it. I'm using two
arrays. The first is stored in the variable "a" and used inside the do function. The second is the
object being used by the do function. So the "item" argument will be 2 on the first iteration, 14
on the second, 19, and so on.
18.5 two arrays

(
a = [60, 45, 68, 33, 90, 25, 10];
b = [2, 14, 19, 42, 3, 6, 31, 9];
b.do(
{arg item;
item.post; " plus ".post; a.post; " = ".post;
a = a + item;
a.postln;
}
)
)

It is also possible to test an array for the presence of a value, or test it with a function, or add
things, remove things, sum all items, reverse, scramble, etc. The message we need to use is
.includes(). This message answers true if the array contains an item or object. It takes one
argument; the object you are looking for. So that [1, 2, 3, 4].includes(3) will return a 'true' and [1,
2, 3, 4].includes(10) will return a 'false.' These true or false returns can be used in an if() function
(described in the previous chapter).
18.6 testing an array

(
a = [60, 45, 68, 33, 90, 25, 10];
b = [25, 14, 19, 42, 33, 6, 31, 9];

100.do(
{arg item;
if(a.includes(item), {item.post; " is in a ".postln});
if(b.includes(item), {item.post; " is in b ".postln});
}
)
)

You can now do the assignments.

19. Strings, and Arrays of strings, the .at() message

19 Assignment

a) Write a function or code segment that has as an argument an array of midi numbers that
represent a twelve-tone row. The function will return a twelve-tone matrix of pitch class
strings (not numbers).

Example:

var matrix, row;


row = [0, 11, 10, 1, 9, 8, 2, 3, 7, 4, 6, 5];
matrix.value(row);

and would return

C B Bb C# A Ab D Eb G E F# F
C# C B D Bb A Eb E Ab F G F#
D C# C Eb B Bb E F A F# Ab G
B Bb A C Ab G C# D F# Eb F E
Eb D C# E C B F F# Bb G A Ab
E Eb D F C# C F# G B Ab Bb A
Bb A Ab B G F# C C# F D E Eb
A Ab G Bb F# F B C E C# Eb D
F E Eb F# D C# G Ab C A B Bb
Ab G F# A F E Bb B Eb C D C#
F# F E G Eb D Ab A C# Bb C B
G F# F Ab E Eb A Bb D B C# C

One of my early instructors pointed out that nearly 80% of programming is user interface. In
other words, it's a piece of cake getting a computer to generate a 12 tone matrix, but getting the
row from the user and communicating the resulting matrix to the user is where all the
programming is used. Writing a program that only generated MIDI integers would not be very
useful to most musicians because many musicians don’t even know what MIDI numbers are. In
order for such a scheme to be useful to real musicians someone would have to translate the
numbers into strings that represent pitch classes. Or even better they would notate the midi
choices as note-heads on a staff. The computer understands only numbers, we understand both,
but are more comfortable with pitch class. (It may sometimes seem that the computer
understands text or strings, but in reality it translates strings such as "boing" into integers that
represent each character. So it really understands them as numbers.)

A string is a group of characters that are contained in an array. The last item in the array is a
"terminating" 0. The 0 indicates the end of the string. The way you write strings in SC is to
enclose a word or series of characters in quotes such as "this is a string". But internally SC
creates an array of 17 elements: one for each of the 16 characters, plus a terminating 0. The first
element is the ascii equivalent (see below for a discussion of ascii numbers) for "t," the second
element in the array is the ascii number for "h," and so on. This all happens behind the scenes, so all we need to
remember is that text contained inside quotes is a string, and you can have an array of strings.
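
Since a string is internally an array of characters, it responds to some of the same messages an array does; a quick sketch (assuming size and at behave here as they do for other arrays):

"boing".size.postln; //number of characters: 5
"boing".at(0).postln; //the character at index 0: b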

Earlier we used math on an array. If you wanted to transpose a set of pitches, you might think
you could transpose a pitch class string by just adding it to a number, (e.g. "C#" + 5), as in the
example below:
19.1 "C" + 5?

(
a = ["C", "D", "E", "F", "G"];
a = a + 5;
a.postln;
)

This example runs, but the results are not what we wanted. What we want in the assignment is
for the program to generate numbers, but print strings.

One method for translating numbers into strings or text is to put the strings in an array and
reference them using the "at" message. The "at" message allows us to refer to a specific item in
an array using an index number. Given the array ["C", "D", "E", "F", "G", "A", "B"] we could
say that D is in array position 1, F is in position 3 (remember, computers start numbering lists at
0), C is in position 0. To refer to a specific item in an array you use the syntax array.at(n), where
n is the index number.

Here is an array of "strings,"11 and a line of code posting one of the strings:
19.2 pitch array index

(
a = ["C", "D", "E", "F", "G"];
a.at(3).postln; //post item at index position 3 in the array a
)

Why did it print F and not E?

So if I pick a random number then rather than just print that number I can use it as an index for
an array and print the string at that position. The result is random pitch
class strings instead of numbers. The second example nests this idea in a do() message.
19.3 random pitch array

(
a = ["C", "D", "E", "F", "G"];
a.at(5.rand).postln;

)

11 Even though there is only one character in each string it is still called a string. But these examples do actually have two items; the character, then the terminating 0.

(
a = ["C", "D", "E", "F", "G"];
50.do({a.at(5.rand).postln;});
)

There is one danger when using the .at message. It is called a wild pointer. A wild pointer is a
reference to a position in an array that doesn't exist. If an array has 6 elements and you tried to
reference an item using an index number 20, you would get a nil because there are only 6 items.
In a sense you end up pointing to either blank space in memory or inappropriate data (that is
beyond the scope of the array). You may have seen a computer program print gibberish on the
screen. The bizarre characters are usually the result of a wild pointer. One method for insuring
you pick a value no larger than the size of the array is to use the .size message, which returns the
size of any array. The code "a.size" will return the size of the array "a" so that "a.size.rand" will
return a random number between 0 and the size of the array.
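
Here is a short sketch of that safe-index idea:

(
a = ["C", "D", "E", "F", "G"];
a.at(a.size.rand).postln; //a.size.rand can never point past the end of the array
)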

Here is a short program that picks random numbers and prints the value from an array of strings
that represent pitches. See if you can read the code on your own before going through the
commented version:
19.4 random pitch class

(
a = ["C", "D", "E", "F", "G", "A", "B"]; //pitch class array
"count, random pick, pitch at index:".postln; //header
10.do( //do 10 items
{arg item, count; //use arguments item and count
var pick;
pick = a.size.rand;
count.post; " ".post; //print the number of this iteration
pick.post; " ".post; //print the number I picked
a.at(pick).postln; //print the item at that array position
}
)
)

You can save on .postln messages by using the concatenate message "++." This is used to put
two strings together. So "this " ++ "string" will be "this string":
19.5 concatenated string

(
a = ["C", "D", "E", "F", "G", "A", "B"];
10.do(
{arg item, count;
var b;
b = a.size.rand;
("Item " ++ count ++ " : " ++ b ++ " = " ++ a.at(b)).postln;
}
)
)

Another method for combining print messages is to put all the variables in an array and print the
array. The above example could be written as ["Item", count, "choice", b, "pitch", a.at(b)].postln.
Run it several times to confirm it is indeed choosing different values each time.
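
A minimal sketch of that array-printing technique, with invented values:

["Item", 3, "choice", 5, "pitch", "A"].postln;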

Here is a more concise version. It does pretty much the same thing, but is more elegant. I enjoy,
and maybe spend too much time with, being clever and efficient when writing code. Some
programmers use the sledge hammer method to get things done. That is to say they slam in the
code however they can. There are often times I resort to a sledge hammer, especially when first
putting things together, but I also love finesse.
19.6 finesse

do(10, { ["C", "D", "E", "F", "G", "A", "B"].at(6.rand).postln;})

The only difference between these two examples is the user interface. In the top example I
spread out the code to make it clear to me (the programmer) which variables are which and break
the operation into several steps for clarity. I also print a lot of extra stuff so that it is clearer to the
user what is going on (the count, the midi number chosen, and the pitch class). In the lower
example I streamline it a lot by omitting variables and replacing them with nested statements and
fewer printing messages.

You should be able to do the matrix now.

20. Making music

20 Assignment

a) Write a function that will return a series of MIDI pitches in a pattern that might be
described as minimalistic. Plug that function into this model:

(
yourFunction

Synth.play({
SinOsc.ar( Sequencer.kr(
yourFunction, //pitch function
LFPulse.ar(8) //trigger
), mul: 0.2) //volume
}) )

Review of language:

If you're comfortable with the language, skip this part.

If you were to summarize the coding side of SC, I think you could say that it consists of
statements that are combined as instructions for the computer to do something. The instructions
usually include messages to objects, with an argument list and variables. Below is a prototype of
a message, object, and argument list:

Object.message(arg1, arg2, arg3);

Sometimes one of the arguments is a function, so that the actual code will often look like this:

Object.message({lines of code;}, {lines of code;}, arg3);

You can "nest" lines of code and messages to objects, and in this way you build up programs that
run the sequence of code where the inner parts of parentheses, brackets, and braces are run first,
followed by the outer. I've added a variable declaration and I've spread this out.
var one; //line 1
one = 5;
Object1.message(
{ //line 4
Object2.message(
{ //line 6
Object3.message(one)
}, //line 8
arg1
),
}, //line 11
arg2,
arg3

);
//end patch

People who code become very accustomed to reading this nested style. They can see that on
line 7 the Object3 message uses "one" as its first argument, and that Object3 message is inside a
function (lines 6 through 8), and that the function on lines 6 to 8 is the first argument for Object2
(line 5), arg1 (line 9) is the second argument, and that Object2 is the first part of another function
(lines 4 to 11), which in turn is an argument in Object1 message (line 3), and that arg2 and arg3
(line 12, 13) are the other arguments in Object1 message.

If you were to translate this into English you might say "first reserve space for values and call it
"one." Store the number 5 in that variable ("one"). Then run the code for Object3 first, using the
variable "one" (or 5, which is currently stored in "one") as the first argument. Run Object2 with
as arguments the return, or results, of the function containing Object3 and arg1 as the second
argument. Run the code for Object1 using the return, or result of the function containing Object3
for the first argument and arg2 and arg3 for the other arguments."

The message usually tells the object what to do and the arguments tell it how to do it. "Return" is
a term used in coding to mean the value that results from a function. We say return because the
lines of code are run during the function and at the end of the function some value results from
those lines of code and is returned to the lines of code wherein it is enclosed.

Array messages

The next assignment will work with arrays, so here is a review of the messages an array
understands (these are all documented in ArrayedCollection).

[1, 2, 3, 4]

is an array
20.1 arrays messages

a = [1, 2, 3, 4]; //assigns the array to the variable "a"

a.post; //prints the array

a + 5; //adds five to each item in the array

a*3; //multiplies it, etc.

a.do({arg item; function}) //iterates over each item passing each item
//to the function

a.at(index) //refers to item at index number

// Here are some new ones. Run each of them to see what they do:

[1, 2, 3, 4].reverse.postln; //reverses the array

[1, 2, 3, 4].rand.postln;

[1, 2, 3, 4].scramble.postln; //scrambles the array

[1, 2, 3, 4].size.postln;// returns the size (number of items)

Array.fill(size, function); //fills an array with "size"
//number of values using the function

a = Array.fill(5, {10.rand}); a.postln;

[1, 2, 3, 4].add(34).postln; //adds an item to the array

//Note about add. You have to pass it to another array variable to


//make sure the item is added. So the code would have to be:

a = [1, 2, 3];
b = a.add(10);
a = b;

[1, 2, 3, 4].choose; //chooses one of the values

[1, 2, 3, 4].put(2, 34).postln; //puts second argument at


//index of first argument

[1, 2, 3, 4].wrapAt(index) //returns item at index with a wrap

//example:

30.do({arg item; [1, 2, 3, 4].wrapAt(item).postln});

There are a lot of ways you can use and index an array in this assignment. I'll do a few examples
at the end.

A Moment of Perspective.

The whole point of this course is to generate music and to use the computer as a servant or mule
to do all the boring detailed parts so we can get on with the creative fun part. So far the only
result of our work has been the printed output. We haven't gotten very far in regard to playing
music. Even so, we could stop here with what you have learned and it would be useful in
composition exercises. For example, suppose I wanted to compose a work with a random walk
for pitch, next note, duration, and amplitude. I could use the tools we've covered so far to print
out the midi number, the time for the next note, the duration of this note, and amplitude for this
note. That's still a pretty good advance over doing it by hand.

But forty years ago I couldn't use a computer. I had to pick each value one at a time using a
random process, write it down, and then transcribe that onto manuscript. The only difference
with what we have learned so far is that I use the computer to pick the random values and print
them out. I still have to transcribe it to manuscript, find performers willing to try the music, have
them rehearse it, get a mediocre and inaccurate performance, then go back to the drawing board
for another try. Even if it came together well would it truly be random if I wrote it down?
Doesn't a random process imply that it is different every time? And if I wanted to try it again
with another random sequence the turn-around would take about two months. Forty years ago a
composer was also limited by the capacity of the performer. He had to discard ideas because the

performers were not capable of playing accurately, fast enough, high enough, etc. (in the case of
micro-tones, for example).

In the early days of computer music, composers started to do what you can do now; get the
computer to at least do the numbers. But it took four pages of code or hundreds of punch cards.
With SC it takes one line. Here is such an example (the first value is duration in eighths, the
second is time until next in eighths, the next is midi pitch, the next is volume):
20.2 Illiac suite?

60.do({[8.rand, 8.rand, (60 + 24.rand), 10.rand].postln;})

That's quite an advance over how Hiller did it in the 50s. But I'm still stuck transcribing the stuff
into manuscript and getting someone to play it. Wouldn't it be nice if I could crunch the numbers
and get the machine to play the results? Even if my final product is intended for live performance
by real musicians, I can use the computer for trials.

Ten years ago I could do just that on a mainframe in Illinois and get the cpu to generate actual
sounds, but it took overnight. Last year (2000) that amount of time was cut down to 5 minutes,
but that still took pages of code and two separate programs (one to crunch the numbers, another
to generate the sounds). Today it's in real time, and it takes about ten lines of code.

Actual Music Examples: How SC Turns Numbers into Sounds.

SC has scads of synthesizer modules, or objects and messages, or we often call them UGens
(Unit Generators). There is a list of Ugens in the Appendix. We've covered enough about coding
to allow you to read through the documentation and figure out what they can do. You patch them
together much like you would a vintage modular synthesizer. The difference is you can also
include logical expressions and random events.

The most important object is Synth, which understands the message "play." If you looked in the
documentation you would see:

Synth

*play(ugenGraphFunc, duration)

(By "look in the documentation" I mean highlight the word "Synth" and hit command-h.)

This means that when I send the play message to Synth I have to give it a function as the first
argument, and a duration as the second. The function generates a graph representing a signal that
can be played.

Here is another UGen: SinOsc. It understands the message "ar", which means generate a graph
representing a sine wave at an audio rate. Highlight SinOsc and type command-h. You see this:

SinOsc.ar(freq, phase, mul, add)

It generates a graph if it is enclosed in a function, and is therefore a ugenGraphFunc, so I can use
it as the first argument in Synth.play, which requires a ugenGraphFunc. The arguments for
the .ar message when sent to SinOsc are frequency, phase, multiply, and add.

Now I can plug the SinOsc object with its message "ar" into the first argument of the play
message inside a function and use it as the first argument for Synth.play. Can you predict what
this line of code will do? (Remember that if I leave off some of the arguments defaults are used
instead.)
20.3 Synth

Synth.play({SinOsc.ar(200, 0.0, 0.5)}, 4)

Spread out with keywords:


20.4 Synth.play

Synth.play(
ugenGraphFunc: {
SinOsc.ar(
freq: 200,
mul: 0.5
)
},
duration: 4
)

The next two unit generators I use are a Sequencer and an LFPulse. The documentation for these
two look like this:

Sequencer.ar(sequence, trig, mul, add)

LFPulse.kr(freq, mul, add)

The LFPulse.kr generates a pulse at a given frequency that can be used as a trigger. I'm going to
"plug" the return of the LFPulse into the trig argument for the Sequencer. This tells the
sequencer when to trigger a new value. Then I'm going to "plug" a random number into the
sequence argument for the sequencer. Finally I will "plug" the results of the sequencer into the
freq argument of the SinOsc. Here is another trick: up until now we have had to select all the
lines of code using the mouse to double click or click-drag. You can also use command-` from
anywhere within the code to match parentheses. Continue hitting command-` until the entire
example is shaded.
20.5 Synth with keywords

(
Synth.play(
ugenGraphFunc: {
SinOsc.ar(
freq: Sequencer.kr(
sequence: 400 + 200.rand,
trig: LFPulse.ar(8)
),

mul: 0.5
)
}
)
)

If you try the example you'll notice that it's not what we expected. The sequence should be a
series of random values. I only get one sustaining value. It is indeed getting a trigger 8 times per
second, but why no random value each time the trigger is sent to the Sequencer?

It's because I didn't enclose the 400+200.rand inside a function.

Notice the difference in this code, then run it and hear the difference. Notice I've also removed
the keywords, which are not necessary because all these arguments are in the correct order.
(Except for "mul", which is out of place, so I have to use the keyword for it.)
20.6 Random sequencer

(
Synth.play(
{
SinOsc.ar(
Sequencer.kr(
{
400 + 200.rand
},
LFPulse.ar(8)
),
mul: 0.5
)
}
)
)

Remember to comment!
20.7 Commented Synth random sequence

//A simple patch that sequences random frequencies. It uses a Synth.play, the
//first argument is a SinOsc. For the frequency of the SinOsc I'm using a
//Sequencer.
(
Synth.play(
{
SinOsc.ar(
Sequencer.kr(
{ //Function for choosing a random number
//between 400 and 600.
400 + 200.rand
},
//Trigger for Sequencer is an LFPulse at
//8 times per second.
LFPulse.ar(8)
),
//The mul argument is volume.
mul: 0.2

)
}
)
)
//End patch

Here's where it comes together. We have been learning how to write functions that generate
values. Now you have a synthesizer that uses functions. Suppose you had previously written a
function called myFunc that returned, or generated random values. You could plug it into the
synth.

SinOsc.ar(
Sequencer.kr(
{
200 + 800.rand
},

Instead of the lines above you could write:

myFunc = {some code for generating frequencies};


SinOsc.ar(
Sequencer.kr(
myFunc,

You can use all of the skills you've learned so far in a function and plug it into the synth model.
Here is a simple example:
20.8 pitchFunc

//A simple patch that sequences random frequencies. It uses Synth.play, the
//first argument is a SinOsc. For the frequency of the SinOsc I'm using a
//Sequencer.
( var pitchFunc, midiNote; //declare variables
midiNote = 50; //set first midi note
pitchFunc = {
midiNote = midiNote + (3 + 5.rand); //increase midi note by a number
//between 3 and 7
if(
midiNote > 100, //check to see if midi is too high
{
//if it is, reset it to a value between 60 and 64
midiNote = 60 + 5.rand;
}
);
midiNote.midicps; //return midi converted to cps
};
Synth.play(
{
SinOsc.ar(
Sequencer.kr(
pitchFunc,
//Trigger for Sequencer is an LFPulse at
//8 times per second.
LFPulse.ar(8)

),
//The mul argument is volume.
mul: 0.2
)
}
)
)

Example Functions

Here are some other functions you can try. They should all be inserted into the patch above
(replacing the existing pitchFunc).
20.9 Pitch functions

//////////
var midiNote, pitchFunc, pitches;
midiNote = 60; //initialize first note
pitches = [60, 61, 62, 63, 64]; //declare an array of pitches
pitchFunc = {
midiNote = pitches.choose; //pick a pitch from the array
midiNote.midicps; // return the cps for that pitch
};

////////
var midiNote, pitchFunc, pitches, count;
midiNote = 60; //initialize first note
pitches = [60, 62, 64, 67, 69, 72, 74, 76]; //declare an array of pitches
count = 0; //initialize count
pitchFunc = {
midiNote = pitches.wrapAt(count); //midiNote is wrapped index of count
if(count%30 == 29, //every thirtieth time
{pitches = pitches.scramble} //reset "pitches" to a scrambled
//version of itself
);
count = count + 1; //increment count
midiNote.midicps; //return cps
};

///////
//My favorite:
var midiNote, pitchFunc, pitches, count;
midiNote = 60; //initialize first note
pitches = [60, 62, 64, 67, 69, 72, 74, 76]; //declare an array of pitches
count = 0; //initialize count
pitchFunc = {
midiNote = pitches.wrapAt(count); //set midiNote to wrap index of count
if(count%10 == 9, //every tenth time
{pitches.put(5.rand, (65 + 10.rand))}//put a new pitch between
//65 and 75 into the array pitches
//at a random index
);
count = count + 1; //increment count
midiNote.midicps; //return cps
};

Now write your own.

21. More Random Numbers

21 Assignment
a) Draw a probability graph for the following functions.

min(rrand(10, 20), rrand(10, 20))

[1, 1, 1, 1, 2, 3, 4, 4, 5, 6].choose

max(10.rand, 10.rand)

max(5.rand, 10.rand)

(20.rand + 20.rand)/2

b) In the patch below replace the number, range, freq1 with biased random functions.
Modify or replace the nextFreq function. Try a narrowly focused random choice, such as
{freq*([2, 3, 4, 5, 6, 7, 8].choose)}

// clustered sines
Synth.scope({
XFadeTexture.ar({
var nextFreq, freq1, freq2, freqArray;
var number, cluster, range;
number = 80;//number of overtones
range = 4.0;//range, in octaves, of overtones
freq1 = 300;//initial frequency
freq2 = range * freq1;
//new frequency function
nextFreq = {rrand(freq1, freq2)};
cluster = Array.fill(2, {
`[freqArray = Array.fill(number, nextFreq ),
freq1/freqArray,
nil]
});
Klang.ar(cluster, 1, 0, 0.3/number);
}, 4, 4, 2)
})

In this section we will talk about random numbers, random walks, biased random walks, etc. Let
me begin by saying that I rarely write random music. I don't intend to teach you how to do
random music. It's not very interesting. But random number generators are at the heart of

artificial intelligence. What I hope to illustrate is how you can teach a computer to improvise by
allowing it a range of choices.

Random Functions in SC

Please return to section one and reread or review the discussion on random numbers.

Filters

Every computer music text I've read talks about random sieves. The theory is you generate
random values and filter out those that don't meet your criteria for the composition. For example,
suppose you are choosing values for the flute, and can only accept values between its range; C4
to about C7. Filtering is a process whereby you choose a large number of random values and
then filter them with a set of criteria, discarding those that don't match your needs. In other
words, for the flute, you keep choosing random numbers until one falls into that range.

This seems very inefficient to me. It would be much easier to do a biased random choice: pick a
number between C4 and C7. I guess a purist would say that I'm tainting the possible outcomes by
beginning with a biased set. Others would argue I'm filtering by biasing the set, but filtering as it
is taught in existing texts has never made sense to me. Biased choices are more useful and
efficient. (Please correct me if you disagree.)

Biased Random Choices

The more efficient method for restricting choices is to bias your choice toward a set of values.
We've already learned to do this in some contexts. When we write the code "rrand(60, 72);" we
are in essence saying "pick a number between 60 and 71." So we have biased the choice to fall
between those values.

Another way to bias a choice is to weight the choices more toward one value, or a set of values.
The weight of each possibility is usually expressed as a value between 0 and 1. In the example
above there is an equal probability that any value between 60 and 71 is chosen (these are
integers, not floating point values), so each individual value has a 1/12, or 0.08333 chance of
being chosen. (The probability should always add up to 1: 0.08333 * 12 = 1.0.)

The probability distribution is usually described as a graph.

Likewise, in the code "[60, 65, 69, 54].choose" each value has a 0.25 chance of being chosen,
since there are four values (0.25*4=1.0).

It is often desirable to bias the choice toward some value. One method, which is easy to see
visually, but not very flexible, is to stuff the ballot box, or load the dice:
21.1 bias

[60, 60, 65, 69, 54].choose

This code still divides the random choice between all elements in the array (each has a 0.2 or
1/5th chance of being chosen), but since 60 is entered twice there is a 0.4 chance of 60 being
chosen.
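
Here is a quick, hedged check of that 0.4 figure, simply tallying how often 60 comes up in 1000 picks:

(
var count;
count = 0;
1000.do({if([60, 60, 65, 69, 54].choose == 60, {count = count + 1})});
count.postln; //roughly 400 out of 1000, or about 0.4
)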

But this method would not work with the line "10.0.rand." In this case the choices are not
discrete integers, but rather floating point values. The chart showing these probabilities would
have a straight line across the top, indicating that any value between the low (0.0) and high
(10.0) number has an equal chance of being chosen.

How would you make a biased choice in this case? Since the program is picking floating point
values (billions of possibilities), it would be impractical to load the choices by entering them all
into an array. The solution lies in mathematical formulae; letting the program pick several
numbers but using math to bias towards one choice or another.

At the end of The Price is Right there is a good example of biased random choices. Since there are
three people spinning off at the end of the show, each one trying to get the highest number, there
is a greater chance that the winning numbers are above 50 than below 50. Each player is trying to
get a higher number, and if you get a low number you will try again for something higher,
especially if someone has already spun a high number. The results are biased toward the higher
numbers because of the game rule that the highest person wins.

Here is another example: Imagine you wanted only one random number between 1 and 3, but
you had two 3-sided dice. You roll both dice each time but use the lower of the two
numbers. The possible combinations of both dice are:

1:1, 1:2, 1:3, 2:1, 2:2, 2:3, 3:1, 3:2, 3:3

But since the final value is the lower of the two there will be a greater number of pairs that
"return" a 1:

1:1 = 1, 1:2 = 1, 1:3 = 1, 2:1 = 1, 2:2 = 2, 2:3 = 2, 3:1 = 1, 3:2 = 2, 3:3 = 3

There are 5 combinations that result in 1, 3 combinations that result in 2, and only one that
results in 3. The choice is then biased toward 1: a 5/9 (about 0.56) chance of 1, a 3/9 (about 0.33)
chance of 2, and a 1/9 (about 0.11) chance of 3.
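If you want to verify those fractions, here is a quick sketch (my own, not from the text) that rolls two 3-sided dice many times and tallies how often the lower value is 1, 2, or 3:

var tally;
tally = [0, 0, 0];
9000.do({
var roll;
roll = min(3.rand + 1, 3.rand + 1); //two dice, keep the lower
tally.put(roll - 1, tally.at(roll - 1) + 1);
});
tally.postln; //roughly [5000, 3333, 1666]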

Using SC code, we could bias a choice toward 0 and away from 10 using the min or max
function.
21.2 bias float

min(10.0.rand, 10.0.rand);

Two random floating point numbers are chosen between 0 and 10, but the lesser of the two is
returned. The result is biased toward lower numbers: for the result to be high, both choices must
be high, but for the result to be low, only one of the two needs to be low. Here is the graph for
the probabilities of this random function:

What would be the probability graph for this code?


21.3 bias

max(100.0.rand, 100.0.rand);

A little tougher question: how would this code change the outcome? (Note that the second random
choice is not a typo; I use 100.rand to indicate an integer choice, not a floating point value.)
21.4 bias

max(200.0.rand, 100.rand);

How about this one (the possible results are 0 to 100, but ask yourself how many combinations
will result in 0? how many will result in 50? how many 100?):
21.5 bias

(100.rand + 100.rand)/2

How would you do an inverted triangle? How would you do a choice between -10 and 0 with a
bias toward -10? How would you do a choice between 0 and 10 with a bias toward 7.5? How
would you assign a percentage choice to each value, so that 60 has a 0.25 chance of being
chosen, 61 has 0.1, 62 has 0.5, etc.?

Complicated? It used to be. SC has a good collection of functions for choosing numbers. Before
we look at them I want to demonstrate a system for testing biased choices. The method I
typically use is to declare a counting array that can keep track of the random choices, then
increment the position of the array as the choices are made.
21.6 test bias

a = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]; //Fill an array with 0s


//There is a better way to fill an array.
//This method is clearer.
1000.do( //do 1000 iterations
{
b = 10.rand; //pick a random number between 0 and 9
a.put(b, a.at(b) + 1); //increment that position
//in the array
}
);
a.postln; //print the results.

The above code picks random numbers and keeps track of them in the array "a." It does this with
a.put(). The random choice is between 0 and 9, and is stored in the variable b, then that position
of the array "a" is increased using the put message. The first argument for "put" is the position in
the array that you are changing, the second is the value. So I change the position chosen by b,
and I change it to a value that is one greater than what it previously was (a.at(b) +1). Last it
prints the results. Using this method you can test each of the random processes we've discussed
above, or the functions listed below.
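As the comment in the example hints, there is a more compact way to build the counting array. One such way (whether or not it is the "better way" the comment has in mind is my assumption) is Array.fill, which builds an array of a given size from a function:

a = Array.fill(10, {0}); //ten zeros, same as typing them out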

With some of the biased random choices above a float is returned. In this case, you must convert
the value to an integer before using it as a reference to the array.
21.7 Test float bias

a = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]; //Fill an array with 0s


//There is a better way to fill an array.
//This method is clearer.
1000.do( //do 1000 iterations
{
//pick a random value, round it, and convert to integer
b = ((10.rand + 10.rand)/2).round(1.0).asInteger;
a.put(b, a.at(b) + 1); //increment that position
//in the array

}
);
a.postln; //print the results.

Here are the random functions, or messages provided with SC:

rand

Random number from zero up to the receiver, exclusive.

Example:

10.rand;

Test:
21.8 rand test

a = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
1000.do({
b = 10.rand;
a.put(b, a.at(b) + 1);
});
a.postln;

rand2

Random number from -this to +this.

Example:

10.rand2;

Test:
21.9 rand2 test

a = [0, 0, 0, 0, 0, 0, 0, 0, 0];
//I removed one slot to accommodate –4 to 4, including 0
"[ -4, -3, -2, -1, 0, 1, 2, 3, 4 ]".postln;
1000.do({
b = (4.rand2 + 4); // I have to add 4 because the .at will not take
//negative values
a.put(b, a.at(b) + 1);
});
a.postln;

linrand

Linearly distributed random number from zero to this.

Example:

10.linrand.postln;

Test:
21.10 linrand test

a = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
1000.do({
b = 10.linrand;
a.put(b, a.at(b) + 1);
});
a.postln;

bilinrand

Bilateral linearly distributed random number from -this to +this.

Example:

10.bilinrand;

Test:
21.11 bilinrand test

a = [0, 0, 0, 0, 0, 0, 0, 0, 0];
//enough slots for -4 to 4, including 0
1000.do({
b = (4.bilinrand + 4); // I have to add 4 because the
//array will not take negative values
a.put(b, a.at(b) + 1);
});
a.postln;

sum3rand

A random number from -this to +this that is the result of summing three uniform random
generators to yield a bell-like distribution. This was suggested by Larry Polansky as a poor man's
gaussian.

This one is harder to see and test. Read about it in Dodge, page 272 and 273.
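Even so, here is one rough way to see its shape, using the same counting-array method (a sketch of mine; I round the result to the nearest integer so it can serve as an index):

a = [0, 0, 0, 0, 0, 0, 0, 0, 0]; //slots for -4 to 4
1000.do({
b = 4.sum3rand.round(1.0).asInteger + 4; //shift -4..4 up to 0..8
a.put(b, a.at(b) + 1);
});
a.postln; //the middle slots should collect the most hits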

windex

The last one, windex (weighted index), lets you pick from a set of values given a matching set of
probabilities. It returns an index chosen according to a list of probabilities that should total 1.0.
(The program won't crash if your weights don't total 1.0; you just won't get the results you expect.)

Here is how you can use it:

21.12 windex

var pitches, weights;
pitches = [0, 1, 2, 3, 4, 5]; //an array of pitches

//probabilities; they don't have to be in order
weights = [0.2, 0.5, 0.1, 0.1, 0.05, 0.05];
pitches.at(windex(weights)).postln;

Here is a test:
21.13 windex test

a = [0, 0, 0, 0, 0];
w = [0.5, 0.2, 0.2, 0.08, 0.02];
1000.do({
b = windex(w); //
a.put(b, a.at(b) + 1);
});
a.postln;

22. SuperCollider Synthesis Basics

22 Assignment

a) Using the Total Control model enter pitch and duration values to reproduce the first four
bars of Bach's Invention in B-flat. Using it as a starting point, experiment with different
methods of mutation.

Up until now we really haven't spent much time on how SC actually generates sound. This
section deals more with computer assisted composition, so we are not as concerned with
instrument design. Most of the assignments focus on pitch choice, duration, amplitude, etc. The
character of the instrument used to realize these ideas is less relevant. As a matter of fact, patches
can be used for MIDI output. The instrument choice can then be done on an outboard
synthesizer. But MIDI is missing a critical component for much of the music I do: microtones.
Unless you use a complicated system of pitch bends (as is done in the notation package Lime),
MIDI won't support microtones. So I'll do a few more examples using some simple instruments
available inside SC, then we'll move to MIDI playback.

Most of the models I'm going to address from here on out will use sound, but don't really require
intimate knowledge of how SC works internally. You could finish the course without knowing
how the sounds are produced. Even so, I'm going to present some basics and let you dig into the
code for synthesis on your own if you'd like.

Self-Documentation

Most of what I've learned in SC has come through taking apart and reproducing existing patches
one piece at a time. In the electro-acoustic field, which changes so rapidly, it's not as useful to
know a specific piece of software or environment. Rather, I think the most valuable skill I can teach is
how you can learn on your own. SC has a number of useful tools for learning, including self
documentation. We used this briefly in the form of command-h. There are two additional key
combinations that are very useful: command-j and command-y. Command-y will show all
Objects that understand a given message. Command-j opens the .sc file that contains that Object.

Most objects in SC have been documented in help files. You can read through these separately,
but you can also call them up using command-h. To do this, select an object (remember an object
begins with a capital letter) and hit command-h. Many other functions, such as abs(), min(), max(), etc.,
are also documented.12 In the documentation you can find out what the arguments are and thereby
deduce which values affect the output in what way. Take this simple sine oscillator:

12 You can create your own help files. To do this, create a file, enter or paste the information you want, and save it
with a .help extension and put it in the help folder in the SC folder. For example, once I solve a problem I save that
solution in a file called "myhelp.help." Then I can open the file by typing "myhelp" in SC, highlighting the word,
and typing command-h.

22.1 Help files

Synth.scope(
{//line 2
SinOsc.ar(200, 0, 0.5)
}, //line 4
0.1 //line 5
);

Double click on Synth and hit command-h. The scope message for Synth takes four arguments:
ugenGraphFunc, duration, name, and bounds. The first two, ugenGraphFunc and duration (size
of the scope window), are the only two we are using here. Lines 2-4 are the ugenGraphFunc, and
line 5 is the duration. All of line 3 is a SinOsc ugen. The .ar message for SinOsc takes four
arguments, of which three are being set here: frequency, phase, and mul. Run this code and try
changing all of the values to see how it affects the resulting sound and/or scope.

Here is another example:


22.2 More help

Synth.scope(
{ //line 2
SinOsc.ar(
LFNoise0.ar(8, 200, 600),
mul: 0.5)
}, //line 6
0.1
);

There are three objects in this code: Synth, SinOsc, and LFNoise0. There are two messages that
require arguments: scope and ar. You should recognize the arguments for Synth: everything
between the { and } is argument one, the ugenGraphFunc. The second argument, 0.1, is the
duration.

Select SinOsc and LFNoise0, press command-h, and you will see the arguments for each of
these objects.

SinOsc.ar(freq, phase, mul, add)

LFNoise0.ar(freq, mul, add)

The arguments for the ar message in SinOsc are frequency, phase, multiply, and add. For
LFNoise0 they are frequency, multiply, and add. (Earlier we used the message "kr." The kr
message is for control rates. Use it when generating a control signal; it is more efficient. Use ar
when generating an audio signal.) Notice that the ar message uses different arguments with each
object. This is an important point that I missed when first learning SC. Each object responds to
a message differently and requires different arguments. They have to be in order, or you have to
use keyword arguments. Refer back to the lines of code. The line with LFNoise0 is the first
argument of ar for SinOsc (the entire line of code), and mul: 0.5 is the second argument supplied.
Normally the second argument is phase, so we have to use the keyword mul: to make sure the
argument is understood correctly.

The arguments for LFNoise0 are freq: 8, mul: 200, add: 600. If we used the first argument only
(LFNoise0.ar(8)), then LFNoise0 would produce 8 random values per second between -1 and 1,
centered at 0. The mul and add arguments are used to scale and offset the output. Offset means
changing the middle value; scale means changing the range of values. If the add is changed to
600, then the middle value is no longer 0, but 600, so the output would be random values
between 599 and 601. The mul argument scales the output to random values between -200 and
+200 (1 and -1 multiplied by 200). Combined with the add, the output is 200 above and 200
below 600. The final result is random values, 8 times per second, between 400 and 800.
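One way to convince yourself of this (a sketch of mine, not from the text) is to compare the mul/add version with the same scaling written out as multiplication and addition; evaluated one at a time, the two lines below should behave the same way:

Synth.scope({SinOsc.ar(LFNoise0.ar(8, 200, 600), mul: 0.5)}, 0.1);

Synth.scope({SinOsc.ar((LFNoise0.ar(8) * 200) + 600, mul: 0.5)}, 0.1);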

As we said before, the output or return of the LFNoise0 line of code is in the position of the
frequency argument for SinOsc. In other words, the frequency of the SinOsc will be a random
choice between 400 and 800. From here on out it's easy to understand what the results will be.
The volume of the SinOsc will be 0.5. In the Synth.scope, the ugenGraphFunc argument is the
patch itself and 0.1 is the duration of the scope that appears on the screen.

So understanding instruments in SC is a matter of knowing what the arguments for a message
stand for, and understanding the output of each unit generator.

Here are a few more you can take apart. Look through the lines of code and answer these
questions:
-What are the messages, what are the objects, what are the arguments?
-What is the ugenGraphFunc?
-What are the arguments for Pan2.ar?
-What is the second argument for Synth.play?
-What does the second argument for Synth.play do?
-What is the freq argument for each SinOsc.ar?
-How do the mul and add arguments change the output of each unit generator?
-How would you find the documentation for LFPulse? LFSaw? Pan2?
22.3 SinOsc patch

(
Synth.play({
Pan2.ar(
SinOsc.ar(
SinOsc.ar(
SinOsc.ar(0.1, 0, 100, 101),
0,
120,
2000
),
0.1
),
SinOsc.kr(2)
)
}, 6)
)

(
Synth.scope({

SinOsc.ar(
LFPulse.ar(
SinOsc.ar(0.1, 0, 4, 5),
LFSaw.ar(2, 0.4, 0.6),
SinOsc.ar(0.5, 0, 100, 200),
900),
0.1
)

}, 0.1)
)

Here are the same examples using key words:


22.4 SinOsc with keywords

(
Synth.play({
Pan2.ar(
in: SinOsc.ar(
freq: SinOsc.ar(
freq: SinOsc.ar(0.1, 0, 100, 101),
phase: 0,
mul: 120,
add: 2000
),
mul: 0.1
),
pos: SinOsc.kr(2)
)
}, duration: 6)
)

(
Synth.scope({
SinOsc.ar(
freq: LFPulse.ar(
freq: SinOsc.ar(freq: 0.1, mul: 4, add: 5),
width: LFSaw.ar(freq: 2, mul: 0.4, add: 0.6),
mul: SinOsc.ar(freq: 0.5, mul: 100, add: 200),
add: 900),
mul: 0.1
)

}, duration: 0.1)
)

Environment Model

Now we are going to skip ahead a lot to allow you to experiment with the compositional aspect
of SC. To do this I'm going to give you a model that has a patch already built in. I don't expect
you to understand it completely. I don't want to completely discourage you from reading about
the other parts of the code, but I will put in bold those parts that are germane to this topic. The
first model is called an "environment" and uses a Pbind function to link different parameters

such as pitch, volume, next event time, and duration together in one environment that generates
events.

We'll call this the "blip" model because it uses an instrument called blip. It's a simple oscillator
that has a variable number of overtones. Below is the entire instrument. The things you might
want to change would be the parameters in the envelope, and the number of harmonics. For now
you might want to select items such as Env, Pan2, and Blip and open the help file.
22.5 blipInst

blipInst = { arg freq, amp, dur, pan;


var env1;
env1 = Env.perc(
0.001.rand, //attack
max(0.5, dur) //decay
);
Pan2.ar(
Blip.ar( //Blip instrument
freq, //taken from Pbind below
3.rand + 2, //number of harmonics
mul: EnvGen.kr(env1) * amp //envelope
),
pan //from Pbind
);
};

The section of code that puts all the parts of this model together is a Pbind. It melds a group of
functions together and passes those values to the blip instrument. Here it is.
22.6 Pbind

Pbind(
\dur, nextFunc,
\midinote, noteFunc,
\veloc, velocFunc*127,
\ugenFunc, blipInst,
\sustain, durFunc,
\pan, panFunc
).play;

Each of the arguments in the Pbind is a function described elsewhere in the code. They are
similar to the functions we have written before; the only difference is that they are wrapped in a
Pfunc. Just so you know there's a better method: the author of SC has told me
several times it is better to use Pseries, but that is more complicated, so we are going to use
Pfunc for now. The two sections above you can ignore. Here is the entire patch with the items
you know how to change in bold. The point is for you to come up with functions that generate
values for pitch, next event, duration, volume, and pan position. (A "total control" method of
composition.) For this first assignment focus on pitch and next event, then expand to the others if
you have time.

One of the easiest functions to use is a biased random choice. The bias is provided by the array
content. Try modifying this patch to generate different biases: towards more tonal music,
towards faster or slower durations, or other types of scales or tuning systems, or random
frequencies. You could even use windex to give probabilities for each item in the array.
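For example (a sketch of mine; the weights are made up and there must be one weight per pitch in the array), noteFunc in the patch below could be replaced with a weighted choice:

noteFunc = Pfunc({pitch.at(windex([0.4, 0.3, 0.2, 0.1]))});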

Notice that the durFunc simply sets duration to the value of the variable next. You should
distinguish between when the next event will occur and the duration of this event. They can be,
but don't have to be, the same. Also realize that if "next" is 0 there will be a simultaneous event.
In this case, duration cannot be 0. So dur should be the max of next and a minimum duration.
22.7 Biased random total control

(
var nextFunc, noteFunc, durFunc, velocFunc, panFunc,
blipInst, pitch, next,
count = 0, nextEvent = 0.5, dur = 0.5 , vol = 0.8;

pitch = [Enter MIDI Pitches];


next = [Enter duration or next values];

noteFunc = Pfunc({pitch.choose});
nextFunc = Pfunc({nextEvent = next.choose; nextEvent});
velocFunc = Pfunc({rrand(0.2, 0.7)});
durFunc = Pfunc({dur = nextEvent; max(dur, 0.3)});
panFunc = Pfunc({1.0.rand2});

//This is a simple instrument that you can fiddle with if


//you'd like, but you don't need to. Soon we'll be going
//to MIDI output, where instrument design is a moot point.

blipInst = { arg freq, amp, dur, pan;


var env1;
env1 = Env.perc(
0.001.rand, //attack
max(0.5, dur) //decay
);
Pan2.ar(
Blip.ar( //Blip instrument
freq, //taken from Pbind below
3.rand + 2, //number of harmonics
mul: EnvGen.kr(env1) * amp //envelope
),
pan //from Pbind
);
};

//You won't want to change what's below.

Pbind(
\dur, nextFunc,
\midinote, noteFunc,
\veloc, velocFunc*127,
\ugenFunc, blipInst,
\sustain, durFunc,
\pan, panFunc
).play;

)
//end of patch

Each of the values in the model above is chosen at random. The next step is to move through a
given set of values in a sequence. There are two methods you can use. Previously we used a
Sequencer.kr to rotate through values. But that is limiting. In this patch I would like to use a
counter and reference items in the array using the count. The counter increments with each event
(in the "next" function). It is then used in a wrapAt message to reference the items in the arrays
pitchPat and nextPat.
22.8 Total control model

(
var nextFunc, noteFunc, durFunc, velocFunc, panFunc,
blipInst, nextPat, pitchPat, midin,
count = 0, next = 0.5, dur = 0.5 , vol = 0.8;

pitchPat = [61, 62];


nextPat = [0.5, 0.5];

noteFunc = Pfunc({midin = pitchPat.wrapAt(count); midin});


nextFunc = Pfunc({next = nextPat.wrapAt(count);
count = count+1;
next});
velocFunc = Pfunc({vol = rrand(0.2, 0.7); vol});
durFunc = Pfunc({dur = next; max(dur, 0.2)});
panFunc = Pfunc({1.0.rand2});

//This is a simple instrument that you can fiddle with if


//you'd like, but you don't need to. Soon we'll be going
//to MIDI output, where instrument design is a moot point.

blipInst = { arg freq, amp, dur, pan;


var env1;
env1 = Env.perc(
0.001.rand, //attack
max(0.5, dur) //decay
);
Pan2.ar(
Blip.ar( //Blip instrument
freq, //taken from Pbind below
rrand(3, 12), //number of harmonics
mul: EnvGen.kr(env1) * amp //envelope
),
pan //from Pbind
);
};

//You won't want to change what's below.

Pbind(
\dur, nextFunc,
\midinote, noteFunc,
\veloc, velocFunc*127,
\ugenFunc, blipInst,
\sustain, durFunc,
\pan, panFunc
).play;

)
You will work with the lines in bold. The first thing you will want to do is enter pitch
information and duration information. For this exercise I want you to first try creating a passage
from one of Bach's inventions, in B-flat. Here is the original, modified slightly and transposed to
the key of C for simplicity:

Entering the pitch and next event values is another area where we get into the interface between
humans and machines. You could enter each midi pitch in its proper octave: [72, 60, 62, 64, etc.].
Or it may be easier to enter values between 0 and 12, then add 60: [12, 0, 2, 4, 2, 0, etc.] + 60. It
just depends on which would be easier to do. (And yes, there is an even easier method that we
will learn later. For now I want you to work with MIDI numbers.)

Using this add method may not seem to make much difference in the case of MIDI numbers, but I
find this sort of math very useful when entering durations. The default duration is one second, so
if you entered [1, 1, 2, 1, 1] in the duration array you would have two notes that last one second,
then one that lasts 2 seconds, then two lasting one second. But that would be much too slow
for our example. I usually play this example at quarter note = 60 bpm. So to get the timings right I
would have to enter [0.5, 0.25, 0.25, etc.]. There are two objections I have to this method. The
first is that it is laborious. The second is that it is difficult to change. What if we wanted to change
the tempo a little? We would have to change every value, e.g. to [0.6, 0.3, 0.3, etc.]. What I
do instead is to enter a relative value and then divide the whole array by a single value that will
result in the correct tempo. I usually start with the smallest value and call it 1, then the rest are
entered in relation to that. If a sixteenth is 1, then an eighth is 2, a quarter is 4, and so on. Then I
determine what value needs to be used to divide each integer for the correct result. I want [2, 1,
1, etc.] to become [0.5, 0.25, 0.25, etc.], so the value is 4; 2/4 = 0.5, 1/4 = 0.25, etc. So the final
array will look like this: [2, 1, 1, 1, 1, 2, etc.]/4. You should try this with a sample line of code if
you'd like to see that it really does work.
22.9 tempo

([1, 2, 3, 4, 5, 6, 7, 8]/4).post;

Now the "4" becomes a sort of metronome which I can use to set the tempo. If I enter [2, 1, 1,
etc.]/2, then the final results will be [1, 0.5, 0.5, etc.], or twice as slow. If I enter [2, 1, 1, etc.]/8
then the final results will be [0.25, 0.125, 0.125, etc.], or twice as fast. Then 4 is normal tempo,
higher numbers are faster, lower numbers are slower. You can even fine tune it by entering
floating point values such as 4.5. (The Pbind environment has a built-in method or symbol for changing tempo; I'm not using it in this patch.)

When you're done it should look something like this:

pitchPat = #[12, 0, 2, 4, etc.]+60;

nextPat = #[2, 1, 1, 1, 1, etc.]/4;

Next we have to parse out each of these values and pass them to the environment model, which
in turn passes them to the instrument blip to be played. The way I move through these values is
using a global variable "count" which is used to reference each array with a wrapAt message in
the two functions noteFunc and nextFunc.

var nextFunc, noteFunc, durFunc, velocFunc, panFunc,


blipInst, nextPat, pitchPat,
count = 0, next = 0.5, dur = 0.5 , vol = 0.8;
[etc.]

noteFunc = Pfunc({var midin; midin = pitchPat.wrapAt(count);


midin});
nextFunc = Pfunc({var next; next = nextPat.wrapAt(count);
count = count+1; next});

You can see that count is declared as a variable and initialized to 0. In the function noteFunc I
use count (now at 0) to reference the values in pitchPat and give midin the value at "count", or 0.
I do the same thing with next in the nextFunc. Also, after storing the value in next, I increment
count so that the next time these two functions are run they will yield values in pitchPat and
nextPat at 1, then 2, then 3, using a wrapAt to make sure I don't exceed the boundaries of the
array. The last statements in both of the functions are simply the variables midin and next. This
is called the "return." A function returns the result of the last line of code. Sometimes it's an
expression such as next.at(count).midicps; in this case it's just a variable.

After entering all the values you should be able to play this "ice cream truck" version of Bach's
famous invention.

What is the point? The point is that now you can easily and quickly try modifications. What
would happen if you doubled all the midi values ([array]+60*2)? What if you halved them
([array]/2+60)? What if you referenced the array backwards (wrapAt(count.neg))? Or referenced
every other item in the array, or every fifth item (wrapAt(count*5))?

23. The Aesthetics of Computer Music

23 Assignment

Mutate the Bach invention. Devise the method first, looking for ways of changing the file
where you aren't sure what the results will be.

a) Phase shift: Both the pitch and next arrays have the same number of elements, such that
they both repeat at the same rate. What would happen if you removed one item from the
pitch content, or the next content, so that they contain different amounts and each repetition
will move them farther out of phase?
b) Phase shift reduction: Could you continue this logic, making the number of elements in
the pitch array smaller and smaller, while the next pattern stays the same? (Use %?)
c) Randomize: Could you maintain the collection of next values but choose them at random?
Would the result be anything close to the original?
d) Multiples: Could you use count at multiples to access the arrays? For example increment
count as you normally would so that it moves through 1, 2, 3, 4, etc., but in the wrapAt use
count*2 so that the next choices will be 2, 4, 6, 8, etc. (1*2, 2*2, 3*2, etc).
e) Coin: Could you insert an if(0.7.coin, ...) statement so that 70% of the time a value from
the pitch or next array is chosen and 30% of the time a random value is chosen?
f) if: Are there other ways of using an if() statement to affect the outcome?
g) Increasing multiples: Could you access either or both next and pitch at increasing
multiples? For example, the first 20 choices will be multiples of 1; 1, 2, 3, 4, etc., then
multiples of 2; 2, 4, 6, 8, etc., then 3s; 3, 6, 9, etc.
h) Extreme tempo: Could you increase the tempo such that a melody is no longer perceived?
Is there a point where you can still perceive pitches but no melody, as in a pandiatonic
system?
i) Pandiatonicism: Could you do a rapid-fire random choice or multiple choice, such that the
melody is obscured? Would you begin to hear some pitches as being dominant, such that a
single chord emerges?
j) Substitution: Could you maintain the pitch or next structure, but substitute other values.
For example; every time you hear a C4 in the original you would hear an Eb3, every E4
would be an F#5?
k) Multiple arrays: Could you construct two or three pitch arrays and then choose the pitches
randomly between the three patterns, or move back and forth between the patterns? Could
you do it in such a way that we still hear the original? Could you choose the pitch arrays
from three different Bach works, or one from Bach, one from Wagner, and one from
Webern?

Mutation

There are two principles of computer aided composition that the previous example brings to
light. The first is mutation (or transcription, commuting, transformation): using an existing
pattern or musical work mixed with new material. A good example of mutation is a famous and
powerful reworking of one of Bach's chorales: "Come Sweet Death." It is a simple, elegant
system and the results are stunning. The choir sings all the pitches of the original chorale with one
new rule: you move to the next pitch when you (personally) run out of breath.

The power of a cpu becomes evident when you imagine the amount of work and hassle it would
be to halve all the intervals in a chorale, copy the results out, and then try to get a choir to sing it
accurately. (I don't mean to imply that live performance with real musicians is a thing of the past.
To the contrary I think one of the advantages of computer composition is using the output of the
system as a rehearsal aid for musicians. This puts very difficult music, such as microtonal works,
within reach of performers.) My point is that the cpu is a very accurate and obedient performer,
on whom you can try unusual ideas. Not only will it do exactly what you ask, but also it won't
interpolate, complain, or question your compositional integrity. Take the exercise we just did
(the Bach invention) for an example. If we wanted to halve all those intervals using SC, or
mutate them into microtones, it would only be a matter of adding two characters (/2) in the
correct spot and running the file again. A lot less work and a lot more accurate than copying it
out and giving it to a performer.

Escaping Human Bias

The second principle is the ability to escape human bias. Settle in because this is a bit of a soap
box subject for me: intuitive vs. systematic composition.

Two years ago I worked for a production company writing children's songs. The process was
this: The producers came to me with an idea about a tune, described how they wanted it to sound,
then I wrote out intuitively what they described and what I imagined. I begin the discussion with
this example to demonstrate that intuitive composition is not wrong. In a case like this it is
essential, it's just not very interesting to me now. I was bored with these pieces the minute they
hit the paper. They were completely predictable and common; exactly what the boss wanted.
Many composers work this way: They imagine a musical event, then describe the circumstances
that will result in that event, give the description to the performers, then make adjustments to the
description and the performer's understanding until the event imagined is realized. I like to think
of this as backward composition. You are working back to something you imagined. I also
believe it is more of a craft than an art. You are reproducing something that already exists
conceptually in your mind14.

When an intuitive composer looks at a blank page, or the end of an unfinished work, and tries to
imagine the next event, the event that finally comes to him is not a new idea. It is rather
something dug up out of the collection of existing ideas from the composer's collective
experiences in music study and performing. (It's probably closer to the truth to say that
composers work with a mixture of system and intuition. I'm separating them in this discussion
for effect.)

14 David Cope's book Virtual Music calls into question whether this style of composition is true creativity or mere
craft, since it can be reproduced mechanically.

The systematic composer works the opposite way. Instead of imagining an event then describing
the circumstances required for that event, she describes a set of circumstances and then discovers
the event that results. In my mind this is what constitutes forward composition. (The difference is
political, as Herbert Brün would say.) And I believe all the true innovators worked this way.
Instead of asking "I wonder how I can get this sound" they say "I wonder what sound
would result if I did this." I'm only paraphrasing here, but I recently read in an electro-acoustic
text that the composers working on the score for Forbidden Planet operated this way. They were
asked how they planned out their ideas and how they worked toward the ideas they had
envisioned. The answer was something like "we didn't have anything in mind, we just
experimented with a patch and sat back marveling at what came out."

An intuitive composer is relying on personal bias for ideas. But the results are often those
familiar piano works with four note chords in the right hand and octaves in the left hand. Why
use octaves in the left hand? Because everything you've played has octaves in the left hand. You
have been biased toward that technique. If you get tired of octaves in the left hand then you try to
escape that bias. How do you escape that bias? You either use a method of iteration (like trying
every other interval in the left hand until one strikes you as useful) or you use a random system
(plunking at the piano for new ideas). Both are systems. The truth is, even intuitive composers
work with systems when they get stuck for ideas. I've concluded that there are two types
of composers; those who use a system and don't know it, and those who understand the system
they are using. Everyone uses a system.

When you are learning to compose you rely on instructors and the existing body of literature for
new ideas. When you hear a new style it is usually a surprise and unexpected. You listen to it for
a while and then become accustomed to it, you begin to like it, and it is infused into your own
style; it has become one of your biases. But what happens when you have picked the brain of all
of your instructors? Where do you go after you have studied all existing styles? How do you
move on to ideas so new that they would surprise your instructors and you?

You use a system. Brün has said, in reference to this method of composition, "my next piece,
which I have not yet learned to like."

Ok, enough preaching. How do you escape your own bias, and how can a cpu help? A computer
has two qualities that are useful in this regard: They are ignorant, and thorough. They will do
exactly what you say, only what you say, and they will do every possible iteration of what you
say.

Ignorant iteration

A computer will not interpolate your instructions. A good example of this is a random walk. If
you ask a musician to play random music they will avoid any patterns that musicians have come
to recognize as musical. If you ask a computer to generate random pitches it may very well come
up with the pitches (in order) for "Somewhere Over the Rainbow." We would not consider that a
random set of pitches, but it is indeed a possible outcome of a random walk. The computer has
no bias of what random music should be, and tonal melodies are included in its definition of
possible choices. It has no bias.

There are two practical results from this lack of bias. The first is that you can test theories
without human interpolation. By this I mean that if your set of instructions to a computer results
in something you didn't expect, then the fault is not with the performer; it is with the instructions.
The computer is never wrong. You are. So a CPU can be used to test a theory of composition.

The other benefit is that the results may surprise you, and they may be a pleasant surprise that
you can use compositionally. I remember reading a sci-fi short story about a child who showed
promise as a musician at an early age. The "big brother" of this story decided that he should be
intentionally sequestered in a room filled with fancy synthesizers and instruments so that he
would grow up without any bias of previous works. It was an experiment of sorts, to see what
would come of a composer with no outside influence. That scenario is a reality with artificial
intelligence. The charm, for me, in many of the processes I set up is that I can't predict the
outcome. In a way you are experimenting with a performer who has been sequestered all his life
and will do only what you ask.

Thorough iteration

A computer will try combinations that we may dismiss because of our bias. The example I like to
use to demonstrate this is a thorough iteration of the five words "I dig you don't work." If a group
of students were asked to come up with every logical combination of those words you might end
up with 30 or so that made sense. But a computer will do every combination (120, to be exact). The
composer, or the one who described the system to the cpu, is forced to consider each variation
valid, since it is true to the design or system (every possible iteration). You will therefore
encounter combinations you might have ignored but that make sense once you are faced with
them on the page. For example, did you think of repeating one of the words? How about "Don't
work! Don't work! Don't!" My instructions did not include a clause banning repetition. So I have
to accept that variation as valid or change the instructions. If it generates combinations I didn't
intend, as described above, then the fault is with my system, and I need to refine the system by
changing my instructions. Either way, I've learned more about the work and myself than I would
have if I had relied on my bias, or a performer's bias.

Below is the SC version of this exercise. This is an exhaustive iteration with no repeated words.
How many make sense? Nearly all except those where you and I are consecutive.
23.1 I Dig You Don't Work

var text;
text = ["I", "DON'T", "DIG", "YOU", "WORK"];
120.do({arg i; i.post; text.permute(i).postln;})

24. Total Control, Serialization, MIDI Out

24 Assignment

a) Using the Total Control model, create patches that walk through the values in each of the
arrays (pitch, next, duration, amplitude) in a total control system of composition using the
methods described below.

Next we will expand the model above to include most aspects of serial composition, or total
control. Total control is a system of composition where all dimensions are controlled by a series
or row. These items can include pitch, next event, event duration, articulation, octave,
instrument, and amplitude or velocity.

Each of these parameters requires special consideration during serialization, so I'll treat each one
separately in the next section. In this section I'll describe general principles of serialization using
SC.

Lines 1-47 below represent a model similar to the one in the last section. I've added arrays that
can be used for amplitude and duration. I've added a midiInst that will send midi signals to an
external unit. I've removed pan because it is no longer useful with MIDI output.

Lines 2-6 are variable declaration and assignment. Line 4 relates to MIDI instrument. Port should
normally be 0, but it will change with each setup. In our lab I think this should be set to 4. (When
SC first starts up you will see the ports listed. Choose the one that makes sense, or experiment
until it plays back on the equipment you want to use.) The channel will depend on what you want
to do with the setup. This should also normally be 0. The program can be changed depending on
which instrument you want to use. Set this to 0 for a standard piano sound.

In addition to the variables for each serialized element (next, dur, vol, midin), there are separate
counters for each function: pCount, nCount, vCount, and dCount.

Lines 16 - 19 are sets of arrays filled with values used by the Pbind below. With pitchPat I add
60 to bring it into the C4 range. Next pattern represents the time of next event. I divide by 4 so
that 1 = 1/4, 2 = 1/2, etc. The amp pattern is multiplied by 0.1 to bring values into 0 to 1.0.
Duration pattern is similar to next, but note that you can set duration and next event separately,
such that one event may remain sounding while another begins. A value of 0 for next will result
in two (or more) notes sounding at once. With each of these arrays you can enter numbers using
a different system as I have. For example the pitchPat could be 45, 56, 75, etc. and amp pat could
be 0.4, 0.3, etc. I choose this method for simplicity. (Once again, the user interface.) You can
also use some other method for generating values such as a mathematical series or proportional
values. Next and duration are in seconds and can be any value (including 0 for a chord). Amp
can be any value between 0 and 1.0. In the midiInst these values are converted to velocity
between 0 and 127. Midin is a midi note (usually between 12 and 120).

Lines 21-25 are the functions. In this model I use a simple process: use the value pCount,
nCount, vCount, and dCount to reference the pitch, next, volume, and duration arrays
using wrapAt to stay within the array. You can expand each one of these using if() statements to
modify the array or to change the way each count is incremented.

If you want to move backward in the arrays, use the count value combined with .neg. This
makes the index negative (e.g. -10, -11, -12, -13), and wrapAt will keep the values within
the array.
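For example (a sketch of mine), the note function could walk backward through pitchPat like this:

noteFunc = Pfunc({midin = pitchPat.wrapAt(pCount.neg);
pCount = pCount + 1; midin});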

I've added a totalDuration variable that is used by the Pbind for a total duration. The playback
will last this long (in seconds). If you want the patch to continue until you stop it with command-
period, then remove the argument and the parentheses: instead of play(duration: totalDuration);
use just play;.

There is also a random seed (rSeed) and a random seed function. If rSeed is left at 0 then the seed
is set to whatever the system chooses and that value is printed for future reference. If the rSeed is
set to anything other than 0, then that value is used. For total control methods the random seed
does not come into play. But if you use random or filtered random elements in the choices then
the rSeed can be used to repeat a particular performance.
24.1 MIDI Total Control

( //line 1
var nextFunc, noteFunc, durFunc, volFunc, panFunc,
blipInst, midiInst, nextPat, pitchPat, volPat, durPat,
channel = 0, port = 0, prog = 0, pCount = 0, nCount = 0, vCount = 0,
dCount = 0, next = 0.5, dur = 0.5 , vol = 0.8, midin = 60,
rSeed = 0, totalDuration = 180;

if(rSeed != 0, //line 8
{thisThread.randSeed = rSeed},
//I have to do a 0.rand to get it to pick its own seed, which I use later.
{0.rand; rSeed = thisThread.randSeed}
); //line 12

rSeed.postln;

pitchPat = [12, 0, 2, 4, 2, 0, 7, 4, 12, 7, 4] + 60;


nextPat = [4, 2, 1, 3]/4;
volPat = [2, 2, 3, 4, 5, 4, 7, 8, 9, 5, 7, 3, 1]*0.1;
durPat = [4, 1, 2, 4, 3]/4;
//line 20
noteFunc = Pfunc({midin = pitchPat.wrapAt(pCount);
pCount = pCount + 1; midin});
nextFunc = Pfunc({next = nextPat.wrapAt(nCount); nCount = nCount + 1; next});
volFunc = Pfunc({vol = volPat.wrapAt(vCount); vCount = vCount + 1; vol});
durFunc = Pfunc({dur = durPat.wrapAt(dCount); dCount = dCount + 1; dur});
//line 26
blipInst = { arg freq, amp, pan; var env1;
env1 = Env.perc(0.001.rand, max(0.5, dur) );
Blip.ar(freq, 3.rand + 2, mul: EnvGen.kr(env1) * amp)};

midiInst = { arg midinote, amp, sustain, outersynth;


MIDIOut(port).noteOn(channel, midinote, amp*127);
outersynth.sched(sustain, {MIDIOut(port).noteOn(channel, midinote, 0);
}); //line 34

nil };

Pbind(//line 37
\dur, nextFunc,
\midinote, noteFunc,
\amp, volFunc,
\ugenFunc, midiInst,
\sustain, durFunc
).play(duration: totalDuration);
//line 44
127.do({arg item; MIDIOut(port).noteOn(channel, item, 0)})

)//line 47

blipInst is a simple instrument that does not require MIDI. midiInst works the same way but
sends signals to MIDI equipment. Lines 37-45 are where they all come together. Pbind generates
events and plays them. It first runs the code for each of the functions then uses the values that
result to build an instrument and play an event. Line 45 is necessary because if you are using
MIDI and stop the process in the middle some notes will be left hanging without a note off
command (normally supplied by line 33). So this is a quick 127.do that sends a note off to all
midi notes.

Serialization: Moving to new values

There are a number of ways you can serialize your choices. I'll cover each one, but realize that
you can use them in any combination. The most obvious is moving forward using a limited
parallel pattern. A limited parallel pattern means all of the value arrays advance through the array
a limited number of times moving the same direction at the same rate. Most traditional music
works this way. The musical score works as the "functions" for each "next" value. The score has
symbols to represent all parameters of musical expression and all the symbols are read from left
to right and are performed a limited number of times. (A non-parallel system in traditional
notation might be an aleatoric set that can be read forward or backward.) For each of these
examples I'll only use pitch and next event arrays to demonstrate. Let's use these two sets:

pitchPat = [1, 2, 3, 4];


nextPat = [2, 1, 1, 3, 1];

A forward limited parallel pattern would advance through pitchPat thus: 1, 2, 3, 4 and the
nextPat thus: 2, 1, 1, 3, 1. The immediately evident inconsistency is that one array has four
values while the other has five. If we pass through the array only once we will either have to
repeat a pitch value or leave a next value off. One solution would be to make sure that all of the
arrays have a matching number of values. If this were the case we wouldn't be doing anything
beyond what a traditional music manuscript would do (or copy program—and, may I add, what
most other composers are doing), so I don't see much point in that system. Even so, to be
thorough, I'll cover this option.

You could stop sending values after count reached a certain number, or you could calculate how
much time in seconds the piece would be and then schedule the synth part of the code to stop at
that time, but the cleanest method is to use Pseq instead of Pfunc (described in the next chapter).

The second method for a limited number of events is to tell the synth to stop after a certain
amount of time. It is done in the play message:

).play(duration: 20);

You could also declare another variable, say, totalTime, and assign a value to that variable in
seconds. Then in the play message at the end include that variable as the duration argument:

var totalTime;
//more code
totalTime = 60 + 120.rand;
//continue code
//last line:
).play(duration: totalTime)

The next method of moving to new values would be an unlimited parallel pattern. This is
actually easier than the limited pattern because you don't have to worry about when and how to
stop the process. You just stop it using command-period or scheduling the synth to stop using the
duration argument as shown above.

Another method moves forward through the values but at an independent rate. There are two
ways you can reference values at independent rates: using a single counting variable or using
multiple counting variables. Using a single variable would require some type of math operation
when the variable is used as an index. The example below uses a single counting variable at
different multiples.

noteFunc = Pfunc({midin = pitchPat.wrapAt(count); midin});


nextFunc = Pfunc({next = nextPat.wrapAt(count*2); next});
ampFunc = Pfunc({vol = ampPat.wrapAt(count*3); vol});
durFunc = Pfunc({dur = durPat.wrapAt(count*4); count = count + 1; dur});

Each pattern will rotate through the arrays at different rates. You could also use a second variable
to represent an addition or multiple expression.

var count = 0, multValue = 2;

noteFunc = Pfunc({midin = pitchPat.wrapAt(count); midin});


nextFunc = Pfunc({next = nextPat.wrapAt(count*multValue); next});
ampFunc = Pfunc({vol = ampPat.wrapAt(count*multValue);
multValue = multValue + 1; vol});
durFunc = Pfunc({dur = durPat.wrapAt(count*4); count = count + 1; dur});

Or you could use separate variables.

var pCount = 0, nCount = 0;

noteFunc = Pfunc({midin = pitchPat.wrapAt(pCount);


pCount = pCount + 1; midin});

nextFunc = Pfunc({next = nextPat.wrapAt(nCount);
nCount = nCount + 3; next});

The next method for moving to a new value is to reverse the direction (move backward through
the array instead of forward). You could change the way each count is incremented, but I think it
makes more sense to change the array at certain intervals. This method allows you more options,
such as inverting the array, halving the values in the array, scrambling the array, etc.

var pCount = 0, nCount = 0;

noteFunc = Pfunc({midin = pitchPat.wrapAt(pCount);


if(pCount%100 == 99,
{pitchPat = pitchPat.reverse}
);
pCount = pCount + 1; midin});
nextFunc = Pfunc({next = nextPat.wrapAt(nCount);
nCount = nCount + 3; next});

Ladders, Boundaries

All of these have been variations of motion in a single direction. A ladder represents a type of
motion that can move either forward or backward for each step along the array (or up and down
the ladder). In SC, the easiest way to do a ladder is with a separate counter and a conditional
increment of 1 or decrement of -1 using "coin" as a condition. (There are variations to this
process; you can have 0 as a possible increment, which would result in repeating the value, or
you can increment by values other than 1 and -1). This is shown in the example below. This line
will return true 50% of the time and false 50% of the time. (You can change that number, e.g.
0.7.coin will return true 70% of the time.) If true, then pCount = pCount+ 1, if false, then
pCount = pCount - 1.

pCount = 0; if(0.5.coin, {pCount = pCount + 1}, {pCount = pCount -1});

Here is the ladder in a coded example.

var ladderCount = 0, nCount = 0;

noteFunc = Pfunc({midin = pitchPat.wrapAt(abs(ladderCount));
if(0.5.coin,
{ladderCount = ladderCount + 1},
{ladderCount = ladderCount - 1}
);
midin});
nextFunc = Pfunc({next = nextPat.wrapAt(nCount);
nCount = nCount + 3; next});

If we use wrapAt and the value exceeds the array either by going too high or too low then it
"wraps around" to the other end of the range.

One other method would be to use a hard boundary or a soft boundary. A hard boundary means
you simply don't allow the value to cross the boundary. If, for example, your boundary is 10 and
the walk up and down the ladder brings you to 10, then the next choice must be 9. So before
running the increment/decrement function you would first check the value to see if it is either 10
or 0. If it is 10, decrement the value automatically; if 0, increment. The problem with this
solution is that values tend to be attracted to the boundary much like a fly to a window.
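Here is what a hard boundary might look like in code (a sketch of mine, assuming a ten-element array with indexes 0 to 9):

if(pCount == 9,
{pCount = pCount - 1}, //at the top, force a step down
{if(pCount == 0,
{pCount = pCount + 1}, //at the bottom, force a step up
{if(0.5.coin, {pCount = pCount + 1}, {pCount = pCount - 1})}
)}
);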

A more elegant solution is a soft boundary: The closer the value gets to the boundary the greater
the chance that a value in the opposite direction is chosen. If there are ten elements in the array
then the closer you approach 0, the greater the chance that a positive value is chosen. As you
approach 10, the chance of a negative value should be greater. This can be expressed as the
current choice divided by the total number of choices. For 0, then 0/10, or no chance that a lesser
value is chosen. For 10, then 10/10, or a 100% chance that a negative value is chosen. The code
for this process is shown below.

[This makes values hover around the middle. So I'm waiting for some math help from a friend to
get a roll off.]
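In the meantime, here is a rough sketch of the idea (my own, not the missing code): the chance of stepping down is the current position divided by the top index, so a walk near the top tends to fall and a walk near 0 tends to rise.

var walk = 0, top = 9;
20.do({
if((walk/top).coin, //walk/top is the chance of moving down
{walk = walk - 1},
{walk = walk + 1}
);
walk.post; " ".post;
});
"".postln;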

So far we've looked at forward motion, and backward motion by increasing or decreasing count.
Another system for advancing to the next value is something we've already done a lot: choosing
random values. Up until now we've mostly chosen raw random values, but now you can
use the random choice to reference a specific set of values. The difference is that you can use the
array to specify each value rather than just a range. For example, just using a rand statement
would give you either integers between e.g. 0 and 10, or floating point values between 0.0 and
10.0, while a referenced array could give you discrete specific values:

nextPat = [1, 1.5, 2.75, 0.133];


next = nextPat.at(4.rand);

The message windex (weighted index) can be used to return a range of indexes with weighted
probabilities. In the lines below windex will return zero 50% of the time, one 10% of the time,
two 10%, and three 30% of the time. When used as an index reference the actual values for next
will be 1: 50%, 1.5: 10%, 2.75: 10%, 0.1333: 30%.

nextPat = [1, 1.5, 2.75, 0.1333];


next = nextPat.at(windex([0.5, 0.1, 0.1, 0.3]));

Versions of the series or array

All of the examples above use a counter as the pointer to new values. In each of these examples
the array stays the same and the counter changes. The method that is more closely akin to true
total control would be to have the counter always move forward at a consistent rate (1) but
modify the arrays to generate different variations such as prime, retrograde, inversion, and
retrograde-inversion. In this case we can move back to a single count which is incremented in the
last function of our model.

The difference will be in the functions. In each function you will use a conditional if() statement
to modify the array. One of the methods for modifying involves a nested array, or an array of
arrays. Consider this code:

a = [[0, 1, 2], [5, 3, 6], [88, 34, 76]];

This is an array of arrays. [0, 1, 2] is an array, [5, 3, 6] is an array, and [88, 34, 76] is an array.
All three of them are enclosed in an array. This is more confusing than if I had just listed all the
elements in a single array; [0, 1, 2, 5, 3, 6, 88, 34, 76], but it is useful if you wanted the numbers
to be used in groups. Remember that an array understands messages such as .scramble,
.reverse, and .choose. So [0, 1, 2].choose will return 0, 1, or 2. So what would the code below
return?
24.2 choosing versions

a = [[0, 1, 2], [5, 3, 6], [88, 34, 76]];


b = a.choose;
b.postln;

The first line stores the multidimensional array in the variable "a", then the second line uses the
message .choose and the array "a", which is an array of arrays, to choose one of the three arrays
at random and store that array in the variable "b." The variable "b" then has an equal chance of
being [0, 1, 2], or [5, 3, 6], or [88, 34, 76].

Instead of putting the actual values in the array we could load up a set of arrays and store them in
the variable "a."
24.3 choosing arrays

a = [0, 1, 2];
b = [5, 3, 6];
c = [88, 34, 76];
d = [a, b, c].choose;

This does pretty much the same thing but using variables. "d" will end up being one of the arrays
stored in "a", "b", or "c. " The lines below show a more concise yet more complicated version.
Can you predict the outcome of these lines?
24.4 reverse, scramble, transpose

a = [1, 2, 3, 4];
b = [a.reverse, a.scramble, a + 10.rand].choose;
b.postln;

In each of the examples above I store the modified array in a new variable (that is, b = [].choose
rather than a = [].choose). This is important if you want to preserve the original array.

We can then use this logic in our Total Control model.


24.5 noteFunc

noteFunc = Pfunc(

{
midin = pitchPat.wrapAt(count);
if(count%20 == 19,
{ //Every 20th time modify the array
pitchPat = [
pitchPat.reverse,
pitchPat.scramble,
(pitchPat + 12.rand)%12
].choose
}
);
midin
}
);

The only other thing I think I need to address before working on the assignment is when to
change to a new version of the row. You could choose a rather arbitrary number as I have above;
20, or you could use a meaningful number, such as the number of elements in the array. This is
more consistent with the total control style. The dumb method is to count the elements in the
array and enter that number. If there are 10 items in the array (as humans count), then count%10
== 9 should work. But what if your functions included the possibility of changing the size of the
array? Or what if you want to try different versions of an array manually (i.e. enter different
passages from the literature). You would have to keep changing the number in the count%20 to
keep the system on track. The slick method is to use the message .size, which returns the size of
the array (as humans count). Here is the code.

if(count%(pitchPat.size) == (pitchPat.size - 1),
{pitchPat = pitchPat.reverse} //or any of the other modifications shown above
);

You now have all the necessary information for working on a Total Control composition.

25. Total Control Continued, Serialization using Pbind, Pseq, Prand

25 Assignment

Pfunc, Pseq, Prand

The slickest method for achieving classic serialism (where versions of the array are quoted in
their entirety before moving to a new version of the array) would be to use Pseq instead of Pfunc.
Pseq sends a series of values in an array from beginning to end. It also has an argument that
allows you to set the number of repeats. If the repeat argument is blank it will run one time. Here
is a prototype:

Pseq([array of values], repeatValue);

You could use Pseq to replace all of the Pfuncs, but if any one of the streams uses a Pseq it will affect
the others: when that Pseq runs out, the whole Pbind stops. That is to say, you could use Pfunc for
everything but pitch, and use Pseq for pitch. When the pitch sequence ends, the series ends. The
lines below show this example. Nothing has
changed except noteFunc, where Pseq is used instead of Pfunc. The pitchPat is used for the array
and 3 is the number of repeats.

pitchPat = [1, 2, 3, 4];


nextPat = [2, 1, 1, 3, 1];
ampPat = [2, 2, 3, 4, 5, 4, 7, 8, 9, 5, 7, 3, 1]*0.1;
durPat = [4, 1]/4;

noteFunc = Pseq(pitchPat, 3);


nextFunc = Pfunc({next = nextPat.wrapAt(count); next});
ampFunc = Pfunc({vol = ampPat.wrapAt(count); vol});
durFunc = Pfunc({dur = durPat.wrapAt(count); count = count + 1; dur});

It is possible to "nest" several Pseq calls to build more complicated patterns. Here is an example.

Pseq([Pseq([array], repeats), Pseq([array], repeats)], repeats);

The result will be one single pattern. Why not just use a single array? Because a series of Pseq
calls allows you to modify a single array using .reverse, .scramble, etc. Below is an example of a
nested sequence. This entire pattern will be repeated three times (line 6). Each nested sequence
will be repeated the number of times indicated (line 2 two times, line 3 one time, line 4 three
times). If the original pitchPat is [1, 2, 3, 4], then this entire noteFunc would be 1, 2, 3, 4; 1, 2, 3, 4; 4, 3, 2, 1; 1.5, 3, 4.5, 6; 1.5, 3, 4.5, 6; 1.5, 3, 4.5, 6, with all of that repeated 3 times.

noteFunc = Pseq([ //line 1
Pseq(pitchPat, 2),
Pseq(pitchPat.reverse, 1),
Pseq(pitchPat*1.5, 3)
], //line 5
3); // line 6

The Pbind, which is generating and playing the events, will stop as soon as this pattern ends (at the end of the sequence a "nil" value is sent to the Pbind, which is its signal to stop).
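A quick way to see that "nil" for yourself (assuming the standard pattern messages asStream and next) is to turn a short Pseq into a stream and ask it for more values than it holds:

(
var stream;
stream = Pseq([1, 2, 3], 1).asStream;
5.do({stream.next.postln}); //posts 1, 2, 3, nil, nil
)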

In classic 12-tone technique there are four versions of the row: original, retrograde, inversion,
and inverted retrograde. Each of these can easily be expressed as a modified array.
25.1 Original, Retrograde, Inversion, Inverted-Retrograde

//Schoenberg Suite Op. 25 [15]


a = [0, 1, 3, 9, 2, 11, 4, 10, 7, 8, 5, 6];

a.postln; //original
a.reverse.postln; //retrograde
((12 - a)%12).postln; //inversion
((12 - a)%12).reverse.postln;// retrograde inversion

In addition to the four versions of the row you can choose any one of the 12 transpositions: P-0, P-1, P-2, etc. Transposing the row is pretty simple in SC.
25.2 Original, Retrograde, Inversion, Inverted-Retrograde

//Schoenberg Suite Op. 25


a = [0, 1, 3, 9, 2, 11, 4, 10, 7, 8, 5, 6];

a.postln; //prime
((a + 4)%12).postln; //prime-4
((a.reverse + 4)%12).postln; //inversion-4 [16]

You can generate the four versions of the row and store each in a variable, then choose from
those variables, or do the math inside the Pseq. Either would work. It's clearer to me to have each
version stored in a variable.
25.3 12-tone Pbind
var prime, retro, inver, inverretro;

prime = [ 0, 1, 3, 9, 2, 11, 4, 10, 7, 8, 5, 6 ];


retro = prime.reverse;
inver = (12 - prime)%12;
inverretro = inver.reverse;

Pbind(
\dur, 0.12,
\midinote,
Pseq([
Prand(
[Pseq((prime + 12.rand)%12),
Pseq((retro + 12.rand)%12),
Pseq((inver + 12.rand)%12),
Pseq((inverretro + 12.rand)%12)] + 60)
], 12)
).play

//end patch

15. The original version begins with E. For the inversions to work correctly these arrays must always begin with 0. So to be precise you might want to add 4 to all calculations.

16. Actually I believe this is incorrect. It would actually be I-12 in a classic 12-tone matrix. But suffice it to say, for our experiments, that it is one of the inverted transpositions.

Earlier we discussed the difference between a random walk and music that sounds random. The 12-tone method has as a goal sounding non-tonal (or, some would say, random), while a true random walk might result in patterns that don't seem random at all. Compare the results of the patch above with this patch. Which is more random, and which sounds more random? Remember, randomness is a matter of perception.
25.4 Random walk?

Pbind(
\dur, 0.12,
\midinote, Pfunc({rrand(60, 72)})
).play

//end patch

There is only one additional consideration for pitch before we add duration: that of octave. The octave choice should be made separately in a system like this to preserve pitch content. In the version above all MIDI pitches are transposed to the 60-72 note range by adding 60. Different octaves can be reached by adding different numbers: 36 for C2, 48 for C3, 60 for C4, 72 for C5, 84 for C6.
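Here is a minimal sketch of that idea, with hypothetical values: the pitch class and the octave are chosen separately, then added together to form the final MIDI note.

(
var pitchClass, octave;
10.do({
pitchClass = [0, 1, 3, 9, 2, 11].choose; //pitch content only
octave = [36, 48, 60, 72, 84].choose; //C2 through C6
(pitchClass + octave).postln; //final MIDI note
});
)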

All other aspects of music can be serialized using this technique. It is even possible to serialize
instrument choice. In the case of MIDI you simply send the pitch information to a different
channel. If you are using your own instruments you can choose them using a Pseq or Prand in the ugenFunc section of the Pbind.
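As a sketch of that last idea (and assuming, as the statement above suggests, that a pattern supplied for the ugenFunc key is evaluated per event just like any other key), you might write something like this; instA and instB are made-up instrument functions:

(
var instA, instB;
instA = {arg freq; SinOsc.ar(freq, mul: EnvGen.kr(Env.perc(0.01, 0.3)))};
instB = {arg freq; Blip.ar(freq, 5, mul: EnvGen.kr(Env.perc(0.01, 0.3)))};
Pbind(
\ugenFunc, Prand([instA, instB], 20), //instrument chosen per event
\freq, Pfunc({rrand(300, 900)}),
\dur, 0.25
).play;
)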

This raises the question of how information is shared in the environment. You may, for example, have designed four instruments, each requiring a pitch, a duration, an amplitude, and possibly
serialized articulations, for each event. How are those values, which are being serialized and
chosen by other components of the Pbind, shared between instruments? They are declared as
arguments in the instruments. Here is a simple musical example.
25.5 Sharing values in the environment

i = {arg myfreq;
SinOsc.ar(myfreq, mul: EnvGen.kr(Env.perc(0, 0.4)))};

Pbind(
\ugenFunc, i,
\myfreq, Pfunc({rrand(200, 1000)})
).play

In the interest of time, however, I will use a series of rhythmic patterns (next event) that are not serialized but modular, taken from one of Schoenberg's works (remember, 0 means a simultaneous event), and a single duration. I will use the model from the previous chapter.

26. Total Control Continued, Special Considerations

26 Assignment

Absolute vs. Proportional Values, Rhythmic Inversion

One of the options in serialization is to choose either absolute or proportional values. An absolute value will always be the same; for example, an absolute C4 will always result in a C4. A proportional value is calculated from a formula and the previous value. It will use the same proportion, but return different final values. An example of a proportional value is the interval of a fifth. In this case the actual value returned may be a C4 or a D-flat3, but it will always be a fifth above the previous value. Proportional values work for pitch, next event, duration, and amplitude. They are especially useful in the case of amplitude, allowing the serialization of gradual changes between volumes (i.e. crescendo and decrescendo).

The danger of exceeding your boundaries is greater when using proportional values. Since you
can't specify absolute pitches within a range you are at the mercy of the system, and it will
regularly go out of range. The solution is to use a buffer, or wrap around as in the section above.
Note that wrapAt is of no help here since we are not using any kind of array for the actual values.

With most parameters a proportional value would be a floating point number. Proportional choices between 0.0 and 1.0 will result in a new value that is smaller than the current value. (For example, given a duration of 2 seconds and a "next" value of 0.75, the final duration will be 1.5.) If the proportional choice is above 1.0 then the resulting value will be greater. (Given a duration of 2 seconds as a current value and a proportional choice of 1.25, the next value will be 2.5.)
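A minimal sketch of that arithmetic, with made-up numbers: each new duration is the previous duration times a randomly chosen proportion, so values under 1.0 shrink it and values over 1.0 grow it.

(
var dur;
dur = 2.0; //starting duration in seconds
8.do({
dur = dur * [0.5, 0.75, 1.25, 1.5].choose; //proportional choice
dur.postln;
});
)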

Pitch

There are two important things to consider when using a proportional pitch scheme. If you are working with frequencies you would express the intervals as ratios of the current value. For example, if you are currently playing an A 440 and you want a proportional value of an octave for the next value, then the ratio is 2:1 (value*2). Be aware that interval ratios such as 2.0 (octave), 1.5 (fifth), 1.333 (fourth), 1.25 (third), etc., are pure (just) intervals, not the intervals of the equal tempered scale we use in most modern music. If you want to match your output with real instruments you will run into comma drift: as the pure intervals accumulate, the pitch level gradually wanders. On the other hand, this can be seen as a powerful feature of computer generated music. We use the equal tempered scale on pianos because of the limitations of keys and strings. They cannot represent all the pitches necessary for free just intonation. For a computer system it is actually easier to do free just intonation than it is to do equal temperament (less math). If you are interested in doing rare or unusual composition, microtones and free just intonation are areas that until now (and maybe still) aren't even in the vocabulary of most western composers (noted exceptions: Ben Johnston, Harry Partch, Ezra Sims).

If, on the other hand, you want to use equal temperament, then MIDI pitches are a solution. Each
MIDI number represents a key on the piano and the tuning is determined by the synthesizer,
which is usually equal. The math that you would use with a proportional system using MIDI pitches is also different. You won't multiply values (well, I guess you could, but this doesn't make as much sense to me); rather, you will add or subtract values. Given a starting point of C4
(MIDI number 60), a fifth above C4 is G4 (MIDI number 67). To get the intervals you add 7 for a fifth, 5 for a fourth, 4 for a third, -4 for a third down, etc. The inversion of a set of MIDI intervals is pretty easy: midiArray.neg. This simply flips the sign of every value, so 5 becomes -5, -10 becomes 10, and -2 becomes 2. (Inversion of a twelve tone matrix is another matter.)
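The two approaches can be compared in a few lines. This is only a sketch: the frequency version multiplies by a pure ratio, the MIDI version adds semitones, and .neg inverts a set of MIDI intervals as described above.

(
var freq, midi;
freq = 440.0; //A440
midi = 69; //the same A as a MIDI number
(freq * 1.5).postln; //up a pure fifth (3:2): 660 Hz
(midi + 7).midicps.postln; //up an equal tempered fifth: about 659.26 Hz
[5, -10, 2].neg.postln; //MIDI interval inversion: [ -5, 10, -2 ]
)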

Duration and next event

You would think that proportional durations would make a lot of sense since our system of
notation is based on proportional values (divisions or multiples of a beat). In practice it becomes
very complex very fast. A series of fractions can become extremely complicated and very
difficult to represent in traditional notation. For example, how would you notate this deceptively simple series of proportional values: 1, 1.5, 1.5, 1.5, 0.5, 0.5 (beginning with 1, quarter note = 60 bpm, or one second)? 1 * 1 = 1 (quarter note) * 1.5 = 1.5 (dotted quarter) * 1.5 = 2.25 (half tied to a sixteenth) * 1.5 = 3.375 (dotted half tied to sixteenth tied to thirty-second) * 0.5 = 1.6875 (??) * 0.5 = 0.84375 (??). How would such a passage be notated in traditional meters? Is this a failing of the computer to think in human terms, or an elegant demonstration of how narrow-minded most western composers are about rhythm? (Unable to break free of the bias imposed by our metric system.)

Another problem with a proportional duration scheme is the value 0. In SC a 0 for next event is
legal, and represents a chord or a simultaneous event. But once you hit the value 0 you will never
escape, because 0 * anything is 0. All of the "next" values beyond that will be 0 and therefore part of a chord: an infinite chord. The solution is either to test the value and give it a non-0 value for
the next choice (but then you will never have more than two notes in a chord), or better, consider
a chord a single event and make the decision about whether or not this event is a chord and how
many elements there are in the chord separate from next event. That is to say, for example, for
each event determine if it is a rest, a single note, or a chord. If it is a chord, determine how many
pitches are in the chord. Make them all the current duration and then determine the next event
that could also be a chord, a rest, or a single pitch.
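Here is one way such a scheme might look; a sketch only, with made-up weights and chord sizes. Each event is first classified as a rest, a single note, or a chord, and only chords get a separate decision about how many pitches they contain.

(
var type, chordSize;
10.do({
type = [\rest, \note, \chord].at(windex([0.2, 0.6, 0.2]));
if(type == \chord,
{chordSize = rrand(2, 4); ("chord of " ++ chordSize ++ " notes").postln},
{type.postln}
);
});
)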

Next Event

Another method for accommodating rests and simultaneous events is to calculate duration and
next event separately. In this case next event could be in one second while the current event is set
to four seconds. Or next event could be 0, generating simultaneous events, while the duration of
each of the simultaneous events could be different. Rests would occur when the next event is
longer than the duration of the current event.

Non-Sequential Events

It is not necessary to calculate events sequentially. Too often we assume that events should be
linear and successive. Another approach might include picking a point in time where a pitch
should occur rather than picking which pitch should be next. For example, if you first determine the
length of a work (e.g. 4 minutes), you could place items in the stream somewhere along that

continuum. Pitch 1 could be placed at 2'23", have a duration of 3 seconds, pitch 2 then might be
placed at 1'13". This poses a problem in regard to coherence. As pointed out earlier we recognize
style sequentially, so it would be difficult to maintain thematic consistency if events are not
calculated sequentially. One solution might include a mixture of sequential and non-sequential
events. Themes or motives may be fleshed out sequentially but placed within the space of time as
described above. However, I think the difference in sequential and non-sequential approaches is
largely academic.

Amplitude

In the case of amplitude a proportional system works well because amplitude is often a function
of the previous value (in the case of cresc., decresc., etc.).

In the systems described above the proportional value is determined from the previous value and
some relationship or formula. In the case of duration another type of proportional value can be
used. The duration could be calculated as a function of the current next event value. That is to say, if the current "next event" value is 2 seconds then the duration could be 50% of 2, or 1 second. In this system rests will result and can be designed into the system (any proportional duration value less than 100% will leave a rest before the next event).
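A minimal sketch of that last idea: each duration is a fixed percentage (here 50%) of the time until the next event, and whatever is left over is a rest.

(
var next, dur;
5.do({
next = [0.5, 1, 2].choose; //seconds until the next event
dur = next * 0.5; //the event itself lasts half that time
[next, dur, next - dur].postln; //the third value is the rest
});
)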

Rhythmic Inversion

One last note about rhythm and total control. When doing a total control scheme the question of
rhythmic inversion inevitably comes up. It is easy to do an original and a retrograde version of a
rhythmic series, but how do you invert it? Most of the solutions I've read in other texts fail to
capture the sense of rhythmic inversion. It is logical that a set of durations should invert to
something opposite, slow to fast, long values to short values, etc. There are two methods I've
used that satisfy this requirement. Both are problematic.

The first takes a very binary view of the row. It requires that you first decide what the smallest
allowable articulation is. For this example, let's use an eighth note. That means that the densest
measure would be filled with all eighths, the least dense would be a single whole note. (Or even
less dense might be a whole note tied to the previous measure, such that there are no articulations
in the current measure.) The logic is that inversion of a dense measure should result in a less dense measure. If we represent a measure filled with durations as 0s and 1s (very computeresque), then each possible point of articulation (each eighth note) is either 0 or 1: a 0 meaning no articulation, a 1 meaning a single articulation. Using this method, four quarter notes would be 1, 0, 1, 0, 1, 0, 1, 0. Two half notes would be 1, 0, 0, 0, 1, 0, 0, 0. The logical inversion then would be to simply swap 0s for 1s. The quarter note measure would invert to 0, 1, 0, 1, 0, 1, 0, 1, or a syncopated eighth note passage. The two half notes would invert to 0, 1, 1, 1, 0, 1, 1, 1, or an eighth rest followed by three eighths, then another eighth rest followed by three eighth notes.
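In SC this binary view is easy to try out. The sketch below represents a measure of four quarter notes as eighth note slots and inverts it by swapping 0s and 1s:

(
var quarters, inversion;
quarters = [1, 0, 1, 0, 1, 0, 1, 0]; //four quarter notes as eighth note slots
inversion = 1 - quarters; //swap 0s and 1s
inversion.postln; //[ 0, 1, 0, 1, 0, 1, 0, 1 ]
)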

The first problem with this method is that you are boxed into using only the declared legal
values. The second problem is that inversions may generate an unequal number of articulations,
and will therefore require different sizes of pitch, and dynamic series. (For example, suppose you
were using an 8 member pitch series and a rhythmic series of two half notes, four quarters, and
two whole notes. The rhythmic series uses 8 points of articulation, but the inversion of the rhythmic series would result in 24 articulations. Where are you going to come up with the extra
notes?) One solution is to make sure you always use exactly half of all the articulation points, so
that the inversion would be the other half. The total number of values would always remain the
same. But this seems to be a narrow constraint.

Another solution is to represent each duration as a proportion of a given pulse. Quarter notes are
1.0, half notes are 2.0, eighth notes are 0.5, etc. The inversion to a rhythm then would be the
reciprocal: 2 inverts to 1/2, 1/2 inverts to 2. Using this scheme maintains the number of
articulations in a series and satisfies the logic of an inversion (fast to slow, slow to fast, dense to
less dense; an active dense line, say, all sixteenth notes, would invert to a relaxed long line, all
whole notes). The only problem with this system is that the actual amount of time that passes
during an inversion may be radically different from the original row. If you are working with
several voices this could throw the other elements of the row out of sync.

You could force the proportions to fit into a prescribed time frame using the normalizeSum message. Take, for example, the array [1, 1.5, 0.25, 4, 0.25]. Those values total 7 seconds in duration. The "reciprocal" inversion would be [ 1, 0.666667, 4, 0.25, 4 ], which totals 9.91667 seconds in duration. Using normalizeSum reduces each element in the array such that they all total 1.0. The result of this action ([ 1, 0.666667, 4, 0.25, 4 ].normalizeSum) is [ 0.10084, 0.0672269, 0.403361, 0.0252101, 0.403361 ]. Those values total 1.0, and we want to fit them into the original duration of 7 seconds. This is done by simple multiplication: [ 0.10084, 0.0672269, 0.403361, 0.0252101, 0.403361 ]*7 = [ 0.705882, 0.470588, 2.823529, 0.176471, 2.823529 ].

The results are usually pretty complex, and similar to the proportional durations mentioned
above they quickly stray from most composers' narrow conception of rhythmic possibilities. This
is a good thing. Example 21.1 shows how this could be done in SC.

Ex. 21.1

var rhythmArray, orLength, inversion;

rhythmArray = [1, 1.5, 2, 1.25, 0.25, 0.25, 1.5, 0.333];


orLength = rhythmArray.sum;
inversion = rhythmArray.reciprocal.normalizeSum*orLength;
inversion.postln;
rhythmArray.sum.postln;
inversion.sum.postln;

The student who suggested the reciprocal method of inversion believes my attempt at preserving
the total duration is a convolution and that when you invert time you invert time and should get a
different total duration. I guess if you take this view then you just need to make sure you invert
all voices at once. That, or not care that the outcomes in the other voices don't match.

27. Music Driven by Extra-Musical Criteria, Data Files

27 Assignment
a) Pick a favorite text, insert it into the array string of EMC Patch. Then devise a pitchMap
and nextMap to correspond with the text.

Extra Musical Criteria

In this chapter we will examine the possibilities of musical choices that are linked to extra-
musical criteria. It would be safe to say that nearly all music has some extra-musical criteria
influencing the composer's choices, but here we will strengthen that link to the point where
nearly all elements of music are based on outside phenomena. These criteria might include
natural data such as electrical impulses from plants, position of stars, a mountain silhouette, or
rain and snow patterns. They could include human structures such as other music, text, bus
timetables; or masked input such as an audience whose motion through a performance space is
being mapped to pitch, amplitude, tone, etc.

Any data stream, whether read from an existing file or read from a real time input source such as
a mouse or video link, can be used to influence musical parameters. SC provides a number of
outside control sources such as MouseX and Y, Wacom pad controls (a pad that generates input
from an electronic pencil), MIDI input, data from a file, etc. For simplicity I will use text as a
data stream to demonstrate the technique and leave you to explore other possibilities.

The reasoning is that the patterns and structure can be transferred from the extra-musical source
to the composition. A star map, for example, would provide x and y locations that would
generate a fairly even distribution of events, while a mountain silhouette would provide a smooth
contiguous stream of events.

In addition to the structure provided by outside criteria, there is a philosophical connection that
brings additional meaning to the composition, e.g. if the pitch choices were taken from the voice
print of one of the musicians. History provides many examples of this type of programmatic
connection, and more specifically those where text has influenced composition. Examples
include not only an attempt to paint the character of a word musically, but also to generate the
exact replica of the word in musical symbols, as in Bach's double fugue, which begins with the four pitches Bb, A, C, B (in German note names: B, A, C, H).

The simplest method for linking text to music is to give each letter a value: a = 0, b = 1, etc., up to z = 25. This would provide a two-octave range of pitches. Letters of the alphabet are represented in computer code as ascii values. That is, the computer understands the character "a" as an integer. This is a convenient coincidence for us. They are already ascii numbers in the computer; we just need to adjust them to the correct range.

Text

Text, words and sentences, or "strings" in most computer languages are stored as an array of
characters. Earlier we used strings in messages such as "This is a string".post. In computer memory the entire sentence is a single array; each character is stored in one position of the array. "T" is at position 0, and an "s" is at positions 2 and 5. Try this code to confirm the array positions.
Change the 2 in the .at message to see which characters are associated with which array
positions. (Remember, start with 0.)
27.1 Array string

a = "Test string";
a.at(2).postln;

The .digit or .ascii message can be used to convert a character into a digit. With .digit both "a"
and "A" = 10, "z" and "Z" = 35. The message .ascii returns the actual ascii value: A = 65, a = 97,
Z = 90, z = 122. Try these lines of code again to confirm the ascii correlation.
27.2 ascii values

a = "Test string";
a.at(2).ascii.postln;
a.at(2).digit.postln;
a.do({arg each; each.post; " ".post; each.ascii.postln;})

The range of ascii values (for upper case) is also conveniently matched with midi values: 65 (the midi number for F4) to 90 (the midi number for F#6). But the lower case values are a little out of range. You can just use upper case, or you can scale the lower case values, or both, using simple addition and subtraction.
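One possible adjustment (only a sketch) is to fold the lower case letters down into the upper case range by subtracting 32, since each lower case ascii value is exactly 32 above its upper case equivalent:

(
"Test string".do({arg each; var m;
m = each.ascii;
if(m >= 97, {m = m - 32}); //fold a-z (97-122) down to A-Z (65-90)
m.postln; //use as a MIDI note
});
)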

The problem with this direct correlation (A = 65, Z = 90) is fairly obvious: you are stuck with the value of the letter. The letters a and e will always be low notes. And since a and e are both very common letters, a higher percentage of those pitches will result.
Mapping

A map will allow greater control over the correlation between character and pitch. You assign values to the input regardless of the intrinsic value of each item in the stream. This way you can control the nature of the composition while retaining the link to the patterns in the stream. In text, for example, we know that vowels occur about every other letter. Sometimes two occur in a row; occasionally they may be separated by as many as four consonants. With this understanding we can then use a map to assign specific pitches to vowels and achieve (for example) a sense of tonality, if a = C4, e = E4, i = G4, o = C5 (a = 60, e = 64, i = 67, o = 72), etc. You can also confine your map to the characters in use, omitting (or including) characters such as spaces, punctuation, etc.

One method for creating a map would be to use an IdentityDictionary, shown below. The variable pitchMap is assigned an IdentityDictionary that contains pairs of elements: the original character and its associated value. The association is made with the syntax "originalValue -> associatedValue", or in this case "$b -> 6", which reads "make the character b equal to the integer 6."

27.3 pitchMap

pitchMap = IdentityDictionary[
$H -> 6, $x -> 6, $b -> 6, $T -> 6, $W -> 6,
$e -> 11, $o -> 11, $c -> 11, $, -> 11, $. -> 11,
$n -> 3, $y -> 3,
$m -> 4, $p -> 8, $l -> 9
];

The numbers associated with the characters are then used as MIDI pitches, after being raised to
the appropriate octave.
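Looking a character up in the map is a single message. Here is a small self-contained sketch using only a few of the pairs from above:

(
var pitchMap;
pitchMap = IdentityDictionary[$e -> 11, $o -> 11, $n -> 3];
(pitchMap.at($e) + 60).postln; //71: the character e played near middle C
(pitchMap.at($n) + 60).postln; //63
)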

After using the IdentityDictionary for a few projects I found it cumbersome to change values.
(Too much typing.) So I settled on a computationally less efficient algorithm that saved time
typing. It uses a two dimensional array. The first element of each inner array is a string of characters, and the second element is the associated value. They are parsed using a .do function which looks at each of the first elements, and if a match is found (using .includes) the mappedValue variable is set to the second element.
27.4 mapping array

var mappedValue, intervMap;

intervMap = [
["ae", 2], ["io", 4], [" pst", 5], ["Hrn", -2],
["xmp", -1], ["lfg", -4], ["Th", -5], [".bdvu", 1]
];

intervMap.do({arg item;
if(item.at(0).includes($o),
{mappedValue = item.at(1)})
});

Here is a patch which controls only pitched elements in an absolute relationship (that is, actual
pitches rather than intervals).
27.5 EMC pitch

(
var noteFunc, blipInst, midiInst, channel = 0, port = 0, prog = 0,
intervMap, count = 0, ifNilInt = 0, midin = 0, inputString;

//The input stream.

inputString = "Here is an example of mapping. The, them, there, these,"


"there, then, that, should have similar musical interpretations."
"Exact repetition; thatthatthatthatthatthat will also"
"be similar.";

//intervMap is filled with arrays containing a collection of


//characters and a value. In the functions below the character
//strings are associated with the numbers.

intervMap = [

["ae", 2], ["io", 4], [" pst", 5], ["Hrn", 7],
["xmp", 1], ["lfg", 3], ["Th", 6], [".bdvu", 11]
];

"// [Char, Interval, ifNilInt, midi interval, octave, midi]".postln;

noteFunc = Pfunc({var parseInt, octave;

//Each array in the intervMap is checked to see if the


//character (inputString.wrapAt(count)) is included. If
//it is then parseInt is set to the value at item.at(1)

intervMap.do({arg item;
if(item.at(0).includes(inputString.wrapAt(count)),
{parseInt = item.at(1)})
});

//If parseInt is notNil, midin is set to that.


//ifNilInt is for storing each parseInt to be used if
//no match is found and parseInt is nil the next time around.

if(parseInt.notNil,
{midin = parseInt; ifNilInt = parseInt},
{midin = ifNilInt}
);

octave = 60;

"//".post; [inputString.wrapAt(count), parseInt,


ifNilInt, midin, octave/12, midin + octave].postln;

count = count + 1;

midin + octave
});

//Two instruments you can use. Blip, or MIDI. If you use MIDI you
//have to set the channel, port, and program variables. They
//should be channel 0, port 0, and program will depend on
//what instrument you want to use on the outboard synth.

blipInst = { arg freq, amp, pan, sustain; var env1;


env1 = Env.perc(0.001.rand, sustain );
Pan2.ar(Blip.ar(freq, 3.rand + 2, mul: EnvGen.kr(env1) * amp), 0)};

midiInst = { arg midinote, amp, sustain, outersynth;


MIDIOut(port).noteOn(channel, midinote, amp*127);
outersynth.sched(sustain, {MIDIOut(port).noteOn(channel, midinote, 0);
});
nil };

Pbind(
\midinote, noteFunc,
\dur, 0.125,
\amp, 0.6,
\ugenFunc, midiInst
).play;

127.do({arg item; MIDIOut(port).noteOn(prog, item, 0)})
)

I admit it's not very interesting. That is partially because pitch values alone usually do not
account for a recognizable style. We are more accustomed to recognizing style as a certain level
of dissonance, not pitch sequences alone. Dissonance is a function of interval distribution, not
pitch distribution. To achieve a particular balance of interval choices the intervMap should
contain proportional interval values rather than absolute pitches. The numbers may look pretty
much the same, but in the actual parsing we will use midin = parseInt + midin%12 (thus parseInt
becomes an interval) rather than midin = parseInt. Interest can also be added by mapping pitches
across several octaves, or mapping octave choices to characters.

It is also more interesting if you control duration, sustain, and amplitude. Any aspect of
composition can be controlled inside the Pbind using this method of mapping. Pbind uses what is
called an environment to generate events. The environment comprises a complete set of symbols
that are matched with values or functions. These are combined when generating an event. Values
that are not given explicitly in the Pbind are supplied by the environment, much like default
arguments. Here is an incomplete list of environment values.

tempo = nil
dur = 1.0
sustain = dur (basically, there are two other operators I'm leaving off here)
amp = db.dbamp
db = -20.0
velocity = 64
pan = 0.0
channels = 2
stepsPerOctave = 12.0
octave = 5.0
scale = #[0, 2, 4, 5, 7, 9, 11]
midinote = (a rather complex formula using note, octave, root, and divisions)
freq = (basically midinote.midicps)
env = Env.asr(0.01, 1.0, 0.5)
ugenFunc = (a formula using orchestra instruments)

Note the difference between duration and sustain. (They are confusing to me, and I've suggested
the author change them, but this is what they are for now.) Duration actually means how long
before the next event occurs. (To me, this is not duration, but next event.) Sustain is the actual
length of the event.

In addition to the existing Pbind symbols you can create your own, which then can be shared in
the Pbind and in the functions used by the Pbind.
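Here is a small sketch of that, modeled on example 25.5. The key \bright is made up; the assumption (which 25.5 demonstrates for \myfreq) is that any symbol you add to the Pbind shows up as an argument of the same name in the instrument function.

(
var inst;
inst = {arg myfreq, bright;
Blip.ar(myfreq, bright, mul: EnvGen.kr(Env.perc(0.01, 0.3)))};
Pbind(
\ugenFunc, inst,
\myfreq, Pfunc({rrand(200, 800)}),
\bright, Pseq([2, 4, 8, 12], 4), //our own serialized parameter
\dur, 0.25
).play;
)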

Here is a version of the EMC patch that controls pitch, duration, sustain, and amplitude.

EMC Total Control

(
var noteFunc, blipInst, midiInst, channel = 0, port = 0, prog = 0,
intervMap, count = 0, ifNilInt = 0, midin = 0, ifNilDur = 1,
durMap, durFunc, ifNilSus = 1, susMap, susFunc, ifNilAmp = 0.5,
curAmp = 0.5, ampMap, ampFunc, inputString;

//The input stream.

inputString = "Here is an example of mapping. The, them, there, these,"


"there, then, that, should have similar musical interpretations."
"Exact repetition; thatthatthatthatthatthat will also"
"be similar.";

//intervMap is filled with arrays containing a collection of


//characters and a value. In the functions below the character
//strings are associated with the numbers.

intervMap = [
["ae", 6], ["io", 9], [" pst", 1], ["Hrn", -3],
["xmp", -1], ["lfg", -4], ["Th", -5], [".bdvu", 1]
];

durMap = [
["aeiouHhrsnx", 0.125], ["mplf", 0.5], ["g.T,t", 0.25],
["dvc", 2], [" ", 0]
];

susMap = [
["aei ", 0.125], ["ouHh", 0.1], ["rsnx", 0.1], ["mplf", 0.5], ["g.T,t",
0.25],
["dvc", 2]
];

ampMap = [
["aeHhrsnx ", 0.8], ["ioumplfg.T,tdvc", 1.25]
];

noteFunc = Pfunc({var parseInt, octave = 36;

//Each array in the intervMap is checked to see if the


//character (inputString.wrapAt(count)) is included. If
//it is then parseInt is set to the value at item.at(1)

intervMap.do({arg item;
if(item.at(0).includes(inputString.wrapAt(count)),
{parseInt = item.at(1)})
});

//If parseInt is notNil, midin is set to that plus previous


//midin. ifNilInt is for storing each parseInt to be used if
//no match is found and parseInt is nil.

if(parseInt.notNil,
{midin = parseInt + midin%48; ifNilInt = parseInt},
{midin = ifNilInt + midin%48}
);

"//".post; inputString.wrapAt(count).postln; "//".post;
// [parseInt, ifNilInt, midin, octave/12, midin + octave].postln;

midin + octave
});

durFunc = Pfunc({var parseDur, nextDur;

durMap.do({arg item;
if(item.at(0).includes(inputString.wrapAt(count)),
{parseDur = item.at(1)})
});

if(parseDur.notNil,
{nextDur = parseDur; ifNilDur = parseDur},
{nextDur = ifNilDur}
);
// [parseDur, nextDur, ifNilDur].postln;
nextDur
});

susFunc = Pfunc({var parseSus, nextSus;

susMap.do({arg item;
if(item.at(0).includes(inputString.wrapAt(count)),
{parseSus = item.at(1)})
});

if(parseSus.notNil,
{nextSus = parseSus; ifNilSus = parseSus},
{nextSus = ifNilSus}
);
// [parseSus, nextSus, ifNilSus].postln;
nextSus
});

ampFunc = Pfunc({var parseAmp;

ampMap.do({arg item;
if(item.at(0).includes(inputString.wrapAt(count)),
{parseAmp = item.at(1)})
});

if(parseAmp.notNil,
{curAmp = curAmp*parseAmp; ifNilAmp = parseAmp},
{curAmp = curAmp*ifNilAmp}
);

count = count + 1;
if(0.5.coin, {curAmp = rrand(0.2, 0.9)});
// [parseAmp, curAmp, ifNilAmp].postln;

curAmp.wrap(0.4, 0.9)
});

//Two instruments you can use. Blip, or MIDI. If you use MIDI you have to set
//the channel, port, and program variables. They should be channel 0,

//port 0, and program will depend on
//what instrument you want to use on the outboard synth.

blipInst = { arg freq, amp, pan, sustain; var env1;


env1 = Env.perc(0.001.rand, sustain );
Pan2.ar(Blip.ar(freq, 3.rand + 2, mul: EnvGen.kr(env1) * amp), 0)};

midiInst = { arg midinote, amp, sustain, outersynth;


MIDIOut(port).noteOn(channel, midinote, amp*127);
outersynth.sched(sustain, {MIDIOut(port).noteOn(channel, midinote, 0);
});
nil };

Pbind(
\midinote, noteFunc,
\dur, durFunc,
\sustain, susFunc,
\amp, ampFunc,
\ugenFunc, midiInst
).play;

127.do({arg item; MIDIOut(port).noteOn(prog, item, 0)})
)

Working With Files

It is not always practical to type the text or data values you want to use into the actual code file.
Once you have devised an acceptable map for text you can consider the map the composition and
the text a modular component. In this case a method for reading the text as a stream of data into
the running program is required. With this functionality in place the mapping composition can
exist separate from the files that contain the text to be read. SC has standard file management
tools that can be used for this purpose.

A word about file path names. (And I'm speaking from empirical experience, not formal
training.) When SC (and most programs) requires data from a file it first looks for the file in the
directory where SC resides. If the file it needs is in the same folder as SC then the file name
alone can be used. If it resides anywhere else a folder hierarchy must be included. The hierarchy
is indicated with folder names separated by a colon. If the data file resides in a folder which is in
the folder where SC resides, then the pathname can be given beginning with a colon, then that
folder. If the file resides outside the SC folder, above the folder or in a folder that is above the
SC folder, then you need to give the entire path name [17] beginning with the drive name. It is
possible to indicate folders above the SC folder using two colons for the directory above, three
for two directories above. So a file in the same folder as SC is simply "MyFile", if it is in a
subfolder it might be ":Data Files:MyFile", if in another area then perhaps "::Audio:MyFile", or
"MacintoshHD:Documents:Computer Music:Data Files:MyFile".

17. One way to find the exact file path name is with this line: "GetFileDialog.new.path.postln;". See also the discussion of GetFileDialog below.

There are a number of data types and file types. For now we'll try a text file since we used them
with the project above, and they are fairly straightforward. But other types of data files can be
used.

The text file can be created with SC (it is, after all, a simple text editor and SC files are text
only), MS Word (but you must save as text only), or SimpleText. Use one of these programs to
create a file containing the text you want to use, then save it in the SC folder.

To open and read the file, first declare a variable to hold the file pointer. Then use File() to open
the file and identify what mode you will be using (read, write, append, etc.). In this example we
use the read mode, or "r."

Once you have opened the file you could retrieve each character one at a time using .getChar, but
I would suggest reading the entire file and storing it in an array, since in the EMC patch above
the text is stored in an array. Here is the code, assuming the text file is named "Test File" (the name used below). This section can be inserted in the EMC patch above (minus the input.postln) in place of the inputString = "Here is . . ." line.
27.6 reading a file

(
var input, filePointer; //declare variables
filePointer = File("Test File", "r"); //create a file for filePointer
if(filePointer.pos.notNil, //if filePointer is not nil
{input = filePointer.readAllString}, //read and store in input
{"File not found".postln} //otherwise post a message
);
filePointer.close;
input.postln;
)

You may not always know the exact name of the file, or you may be choosing different files for
each run. Or you may want to make the program flexible enough to send to other users. In this
case, you can use a GetFileDialog to retrieve the pathname of the file.
27.7 reading a file

(
var input, filePointer, fileName; //declare variables
fileName = GetFileDialog.new.path;
filePointer = File(fileName, "r"); //create a file for filePointer
if(filePointer.pos.notNil, //if filePointer is not nil
{input = filePointer.readAllString}, //read and store in input
{"File not found".postln} //otherwise post a message
);
filePointer.close;
input.postln;
)

28. Markov Chains, Numerical Data Files

28 Assignment

a) Generate a second order transition table for pitch in the first eight measures of Mary Had
a Little Lamb, given below in the key of C.

In several chapters we have brushed up against artificial intelligence. It seems to me that


artificial intelligence is simply describing human phenomena to a computer in a language it
understands: numbers, probabilities, and formulae. Any time you narrow the choices a computer
makes in a musical frame you are in a sense teaching it something about music, and it is making
an "intelligent" or informed choice. This is true with random walks; if you narrow the choice to a
MIDI pitch, for example, you have taught the patch (by way of converting MIDI values to cps)
about scales, ratios, intervals, and equal tempered tuning. If we limit random choices to a C-
major scale, then the cpu is "intelligent" about the whole step and half step relationships in a
scale. If we biased those choices such that C was chosen more often than any other pitch, then
the cpu understands a tonal logic. However, if we biased the choices strongly toward G, but still used a C-major scale, we would get a mixolydian tonality. This type of bias is at the heart of artificial intelligence: giving the cpu a map of human bias. The map is usually in the form of
ratios and probabilities. In a simple biased random choice there is only one level of probability; a
probability for each possible choice in the scale. This is known as a Markov Process with a
zeroth order probability.

A zeroth order probability system will not give us a sense of logical progression, since musical lines
are fundamentally reliant on relationships between pitches, not the individual pitches themselves
or general distribution of pitches in a piece. We perceive melody and musical progression in the
relationship of the current pitch to the next pitch, and the last three pitches, and the pitches we
heard a minute ago. In order for you to describe a melody, you have to describe the connections
between pitches, i.e. intervals.

To get a connection between pitches we need to use a higher order of probability; first or second
order probability at least. This is one of the most fascinating techniques in Computer Music (to
me at least). It is called a Markov Chain of first or second (or higher) order probability. The
technique is described in Computer Music by Charles Dodge (page 283) and Moore's Elements
of Computer Music (page 429). I suggest you read those chapters, but I will also explain it in my
own terms.

The way to describe the connection between two pitches is to have a chart of probable next
pitches given the current pitch. Take for example the pitch G in the key of C. If you wanted to
describe a tonal system in terms of probability you would say there is a greater chance that C
follows G (resulting in a V-I relationship) than say F would follow G (a retrogressive V-IV). If
the current pitch is F on the other hand, then there is a greater chance that the next pitch is E
(resolution of the IV) than C (retrogressive). Markov Chains are not intended solely for tonal
musics. In non-tonal musics you might likewise describe relationships by avoiding the
connection G to C. So if the current pitch is G and avoiding a tonal relationship is the goal you
might say there is a very small chance that the next pitch will be C, but a greater chance that the next
pitch is D-sharp, or A-flat.

You can describe any style of music using a Markov Chain. You can even mimic an existing
composer's style based on an analysis of existing works. For example, you could analyze all the
tunes Stephen Foster wrote, examining the pitch G (or equivalent in that key) and the note that
follows each G. You would then generate a chart with all the possible choices that might follow
G. Count each occurrence of each of those subsequent pitches in his music and enter that number
in the chart. This would be a probability chart describing precisely Stephen Foster's treatment of
the pitch G, or the fifth step of the scale. This is called a probability of 1st order.

If we have determined the probabilities of one pitch based on our analysis, the next step is to
compute similar probabilities for all possible current pitches and combine them in a chart. This is
called a transition table. To create such an analysis of the tune "Frere Jacques" you would first
create the chart with all the possible current pitches in the first column and all next pitches in the
row above each column. The first pitch is C and it is followed by D. We would represent this
single possible combination with a 1 in the C row under the D column.

C D E F G
C 0 1 0 0 0
D 0 0 0 0 0
E 0 0 0 0 0
F 0 0 0 0 0
G 0 0 0 0 0

Next we count up all the times C is followed by D and enter that number (2) in that column. Next
we examine all other Cs and count the number of times C is followed by C, E, F, G, and enter
each of those totals into the table. We do the same for the pitch D, then E, and so on. This is the
resulting chart or transition table:

C D E F G Total
C 1 2 1 0 0 4
D 0 0 2 0 0 2
E 2 0 0 2 0 4
F 0 0 0 0 2 2
G 0 0 1 0 0 1

For each row the total number of combinations is listed. The probability for each cell is calculated as the number of occurrences in that cell divided by the row total, such that the C row values become 1/4, 2/4, 1/4.

C D E F G Total
C .25 .5 .25 0 0 4
D 0 0 1.0 0 0 2
E .5 0 0 .5 0 4
F 0 0 0 0 1.0 2
G 0 0 1.0 0 0 1
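That division can be done for you with normalizeSum, the same message used earlier for the rhythmic inversions; each row of counts becomes a row of probabilities that total 1.0. For example:

[1, 2, 1, 0, 0].normalizeSum.postln; //[ 0.25, 0.5, 0.25, 0, 0 ], the C row
[2, 0, 0, 2, 0].normalizeSum.postln; //[ 0.5, 0, 0, 0.5, 0 ], the E row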

This is a first order transition table [18]. Because we are using only the previous and next note (one connection) it will still lack a convincing sense of melodic progression. To imitate the melody we really need to look at patterns of two or three notes. This brings us to a second order Markov Chain. A second order adds one level to the sequence. That is to say: given the last two pitches, what is the probability of the next pitch being C, D, etc.? Here is the same chart expanded to include all of "Frere Jacques" and a second order of probability. There are 36 combinations, but not all of them occur (e.g. C-A), and those don't need to be included on the chart, so I've removed them.

       C     D     E     F     G     A     Total
C-C          1                 2           3
C-D                2                       2
C-E                      1                 1
C-G    2                             1     3
D-E    2                                   2
E-C    2           1           1           4
E-F                            2           2
F-E    2                                   2
F-G                1           1           2
G-C    1                                   1
G-E                      1                 1
G-F                2                       2
G-G                                  1     1
G-A                            2           2
A-G                      2                 2
Total  9     1     6     4     8     2     30

18. One could argue that a set of probabilities describing just intervals is already a 1st order transition table, since even a single interval describes the relationship between two pitches.

Here are some guidelines: Note that I've totaled all the combinations at the bottom. This is a
quick way to check if you have the correct number of total connections. The total should equal
the number of notes in the piece minus two (because the first two don't have a connection of
three items—or second order). The other thing you have to watch out for is a broken link. A
broken link is a reference to a connection that doesn't have a probability row on the chart. Take
for example the combination C-C. If you entered a probability for the F column in the C-C row,
then the combination C, C, F could result. But there is no row of probabilities for C-F, and the
program would return a nil value and crash (or get stuck in a loop). I don't have a quick or clever
way to check to make sure you don't have any bad leads. You just have to check it carefully. (I
guess a way to do it would be a systematic run through all possible values to make sure they all
link to other values.)
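Here is one way that systematic run might look; a sketch only, with an abbreviated table and hypothetical variable names. For every pair that has a row, each possible next pitch is combined with the second member of the pair, and the resulting new pair is searched for in the table:

(
var transTable, probs, broken;
transTable = [[0, 0], [0, 4], [4, 0]]; //C-C, C-G, G-C (abbreviated)
probs = [
[0.0, 0.0, 0.0, 0.0, 1.0, 0.0], //C-C goes to G
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0], //C-G goes to C
[0.0, 1.0, 0.0, 0.0, 0.0, 0.0] //G-C goes to D: a broken link
];
broken = false;
transTable.do({arg pair, i;
probs.at(i).do({arg prob, nextPitch;
var newPair, found;
if(prob > 0.0, {
newPair = [pair.at(1), nextPitch];
found = false;
transTable.do({arg entry; if(entry == newPair, {found = true})});
if(found.not, {("broken link: " ++ newPair.asString).postln; broken = true});
});
});
});
if(broken.not, {"no broken links".postln});
)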

Here is the chart with percentages.


28.1 Frere Jacques Markov chart

       C     D     E     F     G     A     Total
C-C          .33               .66         3
C-D                1.0                     2
C-E                      1.0               1
C-G    .66                           .33   3
D-E    1.0                                 2
E-C    .5          .25         .25         4
E-F                            1.0         2
F-E    1.0                                 2
F-G                .5          .5          2
G-C    1.0                                 1
G-E                      1.0               1
G-F                1.0                     2
G-G                                  1.0   1
G-A                            1.0         2
A-G                      1.0               2
Total  9     1     6     4     8     2     30

The biggest problem with this type of system is the memory requirements. If, for example, you
were to do a chart for the piano works of Webern, assuming a four octave range, 12 pitches for
each octave, second order probability would require a matrix of 110,592 references for pitch
alone. If you expanded the model to include rhythm and instrument choice, dynamic and
articulation, you could be in the billions in no time. So there needs to be efficient ways of
describing the matrix. That is why in the Foster example mentioned below I take a few confusing but space-saving shortcuts. The chart above for "Frere Jacques" is demonstrated in the file Simple
Markov. Following are some explanations of the code.
28.2 transTable

//A collection of the pitches used

legalPitches = [60, 62, 64, 65, 67, 69];

//An array of arrays, representing every possible previous pair.

transTable = [
[0, 0], //C, C
[0, 1], //C, D
[0, 2], //C, E
[0, 4], //C, G
[1, 2], //D, E
[2, 0], //E, C
[2, 3], //E, F
[3, 2], //F, E
[3, 4], //F, G
[4, 0], //G, C
[4, 2], //G, E
[4, 3], //G, F
[4, 4], //G, G
[4, 5], //G, A
[5, 4] //A, G
];

It would be inefficient to use actual midi values, since so many midi values are skipped in a tonal
scheme. So legalPitches is used to describe all the pitches I will be working with, and the actual code looks for and works with array positions, not midi values. (That is, array positions which
contain the midi values.)

The transTable describes the first column of my transition table. Each of the possible previous pairs is stored in a two dimensional array (arrays inside an array).

The value I use to compare and store the current two pitches is currentPair. It is a single array
holding two items, the first and second pitch in the pair I will use in the chain. At the beginning
of the program they are set to 0, 0, or C, C.

Next I have to match the currentPair with the array transTable. I do this by parsing the entire
array transTable with a .do function. In this function each of the two position arrays will be
compared to the variable currentPair, which is also a two position array. When a match is found
the index of that match (or position in the array where it was found) is stored in nextIndex. In
other words, I have found the index position of the currenPair. This is necessary because I have
pared down the table to include only combinations I'm actually using. Otherwise I could
probably just use the actual values of the previous pair (00, 01, 02, etc., to 10, 11, 12) in some
scheme to determine the next value. But in this simple exercise that would be very inefficient
and take a lot of space (36 arrays).
28.3 Parsing the transTable

transTable.do({arg index, i; if(index == currentPair,


{nextIndex = i; true;}, {false})});

Next I describe the index for each previous pair. If, for example, the current pair were D, E, their values in the transTable would be [1, 2], and the lines of code above would find a match at array
position 4 (remember to count from 0). That means I should use the probability array at position
4 in the chart below. In this chart it says that I have a 100% chance of following the D, E in
currentPair with a C.
28.4 Probability chart

nextPitchProbability =
[
//C D E F G A
[0.00, 0.33, 0.00, 0.00, 0.66, 0.00], //C, C
[0.00, 0.00, 1.00, 0.00, 0.00, 0.00], //C, D
[0.00, 0.00, 0.00, 1.00, 0.00, 0.00], //C, E
[0.66, 0.00, 0.00, 0.00, 0.00, 0.33], //C, G
[1.00, 0.00, 0.00, 0.00, 0.00, 0.00], //D, E
[0.50, 0.00, 0.25, 0.00, 0.25, 0.00], //E, C
[0.00, 0.00, 0.00, 0.00, 1.00, 0.00], //E, F
[1.00, 0.00, 0.00, 0.00, 0.00, 0.00], //F, E
[0.00, 0.00, 0.50, 0.00, 0.50, 0.00], //F, G
[1.00, 0.00, 0.00, 0.00, 0.00, 0.00], //G, C
[0.00, 0.00, 0.00, 1.00, 0.00, 0.00], //G, E
[0.00, 0.00, 1.00, 0.00, 0.00, 0.00], //G, F
[0.00, 0.00, 0.00, 0.00, 0.00, 1.00], //G, G
[0.00, 0.00, 0.00, 0.00, 1.00, 0.00], //G, A
[0.00, 0.00, 0.00, 1.00, 0.00, 0.00] //A, G

];

The choice is actually made using windex. The function windex (weighted index) takes an array
of probabilities as its first argument. The array nextPitchProbability is an array of arrays, and I
want to use one of those arrays as my probability array, in this case the array at position 4. The
way I identify the array within the array is nextPitchProbability.at(4). So I could use this syntax
for array 4: windex(nextPitchProbability.at(4)). The return from windex is an array position, which I will store in nextPitch, and the variable that tells me which probability array to use in nextPitchProbability is nextIndex, which results in this line:

nextPitch = windex(nextPitchProbability.at(nextIndex));

The variable nextPitch is an array position that can then be used in conjunction with legalPitches
to return the midi value for the correct pitch: legalPitches.at(nextPitch). It is also used for some
necessary bookkeeping. I need to rearrange currentPair to reflect my new choice. The value in
the second position of the array currentPair needs to be moved to the first position, and the
nextPitch value needs to be stored in the second position of the currentPair array. (In other words,
currentPair was D, E, or array positions 1, 2, and I just picked a C, according to the table, or a 0.
So what was [1, 2] needs to be changed to [2, 0] for the next pass through the function.)

currentPair.put(0, currentPair.at(1));
currentPair.put(1, nextPitch);
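Before moving on, here is windex by itself. It returns an index into the array of weights (which should sum to 1.0); over many calls, index 0 will come up about half the time here:

20.do({windex([0.5, 0.25, 0.25]).post; " ".post;});
"".postln;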

A more complex example of a transition table and Markov process is demonstrated in the file
Foster Markov, which uses the chart for Stephen Foster tunes detailed in Dodge's Computer
Music on page 287. I wrote this a while back, and I think there are more efficient ways to do the
tables (and I make some confusing short cuts), but it does work.

Here is the entire Simple Markov patch:


28.5 Simple Markov

var legalPitches, transTable, previousPitch, currentPair, nextIndex,


nextPitchProbability, pchoose, blipInst, envelope, pClass, count,
nextPitch;

//currentPair represents the last two pitches, count and pClass are just
//for user interface; printing the pitch information.

currentPair = [0, 0];


count = 1;
pClass = #["C", "D", "E", "F", "G", "A"];

blipInst = { arg freq, amp, pan, dur; var env1;


env1 = Env.perc(0.001.rand, max(0.5, dur) );
Pan2.ar(Blip.ar(freq, 3.rand + 2, mul: EnvGen.kr(env1) * amp), pan)};

"//".post;

//pchoose is the function for picking the next value.

pchoose =
{

//A collection of the pitches used

legalPitches = [60, 62, 64, 65, 67, 69]; //C, D, E, F, G, A

//An array of arrays, representing every possible previous pair.

transTable = [

[0, 0], //C, C
[0, 1], //C, D
[0, 2], //C, E
[0, 4], //C, G
[1, 2], //D, E
[2, 0], //E, C
[2, 3], //E, F
[3, 2], //F, E
[3, 4], //F, G
[4, 0], //G, C
[4, 2], //G, E
[4, 3], //G, F
[4, 4], //G, G
[4, 5], //G, A
[5, 4] //A, G
];

//All arrays in transTable are compared to current pair


//and the index of a match is returned.

transTable.do({arg index, i; if(index == currentPair,


{nextIndex = i; true;}, {false})});

nextPitchProbability =
[
//C D E F G A
[0.00, 0.33, 0.00, 0.00, 0.66, 0.00], //C, C
[0.00, 0.00, 1.00, 0.00, 0.00, 0.00], //C, D
[0.00, 0.00, 0.00, 1.00, 0.00, 0.00], //C, E
[0.66, 0.00, 0.00, 0.00, 0.00, 0.33], //C, G
[1.00, 0.00, 0.00, 0.00, 0.00, 0.00], //D, E
[0.50, 0.00, 0.25, 0.00, 0.25, 0.00], //E, C
[0.00, 0.00, 0.00, 0.00, 1.00, 0.00], //E, F
[1.00, 0.00, 0.00, 0.00, 0.00, 0.00], //F, E
[0.00, 0.00, 0.50, 0.00, 0.50, 0.00], //F, G
[1.00, 0.00, 0.00, 0.00, 0.00, 0.00], //G, C
[0.00, 0.00, 0.00, 1.00, 0.00, 0.00], //G, E
[0.00, 0.00, 1.00, 0.00, 0.00, 0.00], //G, F
[0.00, 0.00, 0.00, 0.00, 0.00, 1.00], //G, G
[0.00, 0.00, 0.00, 0.00, 1.00, 0.00], //G, A
[0.00, 0.00, 0.00, 1.00, 0.00, 0.00] //A, G

];

//nextIndex from above is used to send the array from


//nextPitchProbabilty which corresponds to the transTable.

nextPitch = windex(nextPitchProbability.at(nextIndex));

pClass.at(nextPitch).post;
" ".post;
if((count%20) == 0, {"\n//".post};);
count = count + 1;

//bookkeeping: the value that was in currentPair position


//0 is replaced with currentPair at position 1 and position
//1 is replaced with nextPitch, the two combine for the
//next round as currentPair.

currentPair.put(0, currentPair.at(1));
currentPair.put(1, nextPitch);

//legalPitches at nextPitch is returned

legalPitches.at(nextPitch)
};

Pbind(
\dur, 0.5,
\midinote, pchoose,
\db, -10,
\pan, 0.5,
\ugenFunc, blipInst
).play(Event.protoEvent, duration: 120);

Data Files, Data Types

As with the previous section, it might be useful to store the data for transition tables in a separate
file so that the program can exist separate from the specific tables for each composition. The data
files can then be used as a modular component with the basic Markov patch.

Text files, as discussed above, contain characters. But the computer understands them as integers
(ascii numbers). The program you use to edit a text file converts the integers into characters. You
could use SimpleText to create a file that contained integers representing a transition table, but
the numbers are not really numbers, but rather characters. To a cpu "102" is not the integer 102,
but three characters (whose ascii integer equivalents are 49, 48, and 50) representing 102. The
chart below shows the ascii numbers and their associated characters. Numbers below 32 are non-
printing characters such as carriage returns, tabs, beeps, and paragraph marks. The ascii number
for a space (32) is included here because it is so common. This chart stops at 127 (the top of the standard 7-bit ascii range), but there are character codes above 127. The corresponding characters are usually diacritical combinations and accented Latin letters.

032 033 ! 034 " 035 # 036 $ 037 % 038 & 039 '

040 ( 041 ) 042 * 043 + 044 , 045 - 046 . 047 /

048 0 049 1 050 2 051 3 052 4 053 5 054 6 055 7

056 8 057 9 058 : 059 ; 060 < 061 = 062 > 063 ?

064 @ 065 A 066 B 067 C 068 D 069 E 070 F 071 G

072 H 073 I 074 J 075 K 076 L 077 M 078 N 079 O

080 P 081 Q 082 R 083 S 084 T 085 U 086 V 087 W

088 X 089 Y 090 Z 091 [ 092 \ 093 ] 094 ^ 095 _

096 ` 097 a 098 b 099 c 100 d 101 e 102 f 103 g

104 h 105 i 106 j 107 k 108 l 109 m 110 n 111 o

112 p 113 q 114 r 115 s 116 t 117 u 118 v 119 w

120 x 121 y 122 z 123 { 124 | 125 } 126 ~ 127

If you would like to test this, create a text file using SC, SimpleText, or MS Word (save it as text
only) and run these lines of code.
28.6 test ascii

var fp;
fp = File("Testascii", "r"); //open a text file
fp.length.do({a = fp.getInt8; a.postln}); //read entire file as integers

Data appropriate for a transition table (integers or floats) could be opened in a text editor, but it
would display gibberish, not the transition data. So the question arises, how do you create a data
file? It is not as simple as a text file. It must be done with a program that writes and reads data
streams other than characters. SC can create such files. (But be sure to read ahead, there is a
simpler method.)

The transition tables above used integers, but the probability table used floating points. It is
important to distinguish between the two. Below are examples of code for writing, and reading
files containing integers and floating point values. There are three messages for integers;
.putInt8, .putInt16, and .putInt32 for 8 bits (a byte), 16 bits (two bytes), and 32 bits (four bytes). Each size has a limited capacity: a signed 8 bit integer can be as large as 127, 16 bit as large as 32,767, and 32 bit has a 2 billion+ capacity. There are two messages for floats: .putFloat and .putDouble. A "Double" is larger and therefore more precise, but doubles take up twice the space, and floats should be sufficient for what we are doing. For characters the messages .putChar and
.putString can be used.

It is important to understand these data types because you need to read the same type that you
write. If you write data as 16 bit integers but read them as 8 bit integers the numbers will not be
the same. Following are code segments that write and read floating-point values and integers.
The second two examples make use of PutFileDialog, similar to GetFileDialog, but with a
default prompt and default file name.
28.7 data files

var fp, data;


fp = File("TestInt", "w"); //open a file
data = [65, 66, 67, 68, 69, 70, 71];
data.do({arg eachInt; fp.putInt16(eachInt)}); //place each int in file
fp.close;

var fp, data;


fp = File("TestInt", "r"); //open a file
data = fp.readAllInt16; //read all as Int array
data.postln;

fp.close;

var fp, data, fileName;

fileName = PutFileDialog.new("Choose a file name", "untitled").path;
fp = File(fileName, "w"); //open the chosen file for writing
data = [6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1];
data.do({arg eachFloat; fp.putFloat(eachFloat)}); //place each float in file
fp.close;

var fp, data, fileName;

fileName = PutFileDialog.new("Choose a file name", "untitled").path;
fp = File(fileName, "r"); //open the chosen file for reading
data = fp.readAllFloat; //read all as array
data.postln;
fp.close;

I chose to write the integers 65 through 71 because they correspond to the ascii characters A
through G. To see this, open the TestInt file, which supposedly contains only integers, not
characters, with SimpleText, MS Word, or SC; these programs will display the bytes as text
characters. To confirm that the values are still integers (or floats, in the PutFileDialog examples),
use the file reading method described in the previous section to read the values.
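If you want to see the effect of reading with the wrong size, here is a quick sketch (not one of the numbered examples) that reads the TestInt file created above back one byte at a time. Because the file was written with .putInt16, each value will likely appear as two numbers (a 0 followed by the character code) rather than the original integers.

var fp;
fp = File("TestInt", "r"); //the file written above with .putInt16
fp.length.do({a = fp.getInt8; a.postln}); //read it back as 8 bit integers
fp.close;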

Interpreting Strings

Files containing integers and floats still present a problem. It is difficult to manage the data
especially if they are grouped in arrays of arrays, as in the Markov patch. There is really no fast
easy way to check the file to make sure all the data are correct. You can't read it as a text file.
You have to read it as 8, 16, or 32 bit ints, or floating point values. If you get any of the data
types incorrect, or if a value is in the wrong position, the structures and arrays will be off. (Don't
misunderstand; it can be done, it's just a hassle and error prone.)

It would be easier if we could create the files using a text editor or SC, but read them as data
files. Or read them as strings but parse them into the correct data.

For those of you who have worked with C you know that managing and parsing data represents a
large amount of programming. True to form, it is very simple in SC. The message .interpret
translates a string into code that SC will understand. We already know how to read a string from
a file. But strings ("24 + 405") are very different from code (24 + 405). With interpret, we can
read a section of text from a file, then convert it from a string into SC code. Using this
combination of reading strings from files and interpreting them you can save entire functions,
data structures, lists, error messages, and macros in files that can be edited as text with SC or any
text editor, but used as code in an SC patch.

To test this, first open a new window and type these lines which represent an array of arrays,
containing different data types (note that I did not end with a semicolon):

[
[1, 2, 3],
[2.34, 5.12],
["C4", "D4", "E4"],
Array.fill(3, {rrand(1.0, 6.0)})
]

I've intentionally filled this array with several different data types, including strings19, to
illustrate how simple this is in SC. If I were managing this array of arrays with a C compiler I
would have to keep close track of each data type, the number of items in each array, the total size
and length of each line, etc. It was a hassle.

Now run this code and notice that while the first .postln shows that the array is indeed a string
when first read from the file, in the subsequent code it is being treated as code. This is possible
because of the line array = array.interpret.
28.8 interpreting a string

var fp, array;


fp = File("arrayfile", "r");
array = fp.readAllString; //read the file into a string
array.postln; //print to confirm it is a string
array = array.interpret; //interpret it and store it again
array.at(0).postln; //confirm it is code not a string
array.at(1).sum.postln;
array.at(2).at(0).postln;
array.at(3).postln;

The advantage of managing files as text and then using interpret is that I can open the arrayfile with SC or
any text editor and modify the data as text. Likewise I can import data from a database such as
FileMaker Pro or Excel as a text file and use that. This is a lot easier to proof. I can type the
arrays in the simple Markov patch into a file using SC, then save it as text, but use it as data
when read into a stand alone Markov patch. Now it is possible to have a Markov patch with
modular files containing data for different probabilities.
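Here is a quick sketch (not one of the numbered examples) of writing such a text file from within SC itself using .putString; the file name "arrayfile2" and the simplified contents are just placeholders. After running it you could edit the file by hand, then read it back with readAllString and interpret as shown above.

var fp;
fp = File("arrayfile2", "w"); //open (or create) the file for writing
fp.putString("[[1, 2, 3], [2.34, 5.12], [60, 62, 64]]"); //plain text, editable later
fp.close;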

19
Note that a string within a string will not work if typed directly into SC because of the duplicate quotation marks.
For example, a = "[[1, 2, 3], ["C4", "D4"]]" is not allowed. The compiler reads the second quote, next to the C4, as
the closing quote. You can get around this using the backslash character: "[[1, 2, 3], [\"C4\", \"D4\"]]". But when
contained in a file and interpreted as it is here, the backslash is unnecessary.

29. Sound Files, Music Concrète

29 Assignment

TBD

For me, SC has liberated traditional electro-acoustic composition. I no longer have to settle on a
single performance, recorded on tape, that will play back the same each time. Similar tools exist
for concrète treatment; manipulating recordings of real sound in real time.

For this exercise you must supply your own sound files. I will assume the user has already
recorded segments of audio and saved those files in the Sounds folder. They should be 44.1k
sampling rate, 16 bit files of about 5 to 10 seconds of audio. First we'll open the files and play
them back. The playback engine is PlayBuf. The arguments are the sound file buffer, sample
rate, playback rate, and offset; we will only use these. We are going to do loops, and PlayBuf does
have loop arguments, but they seem to be designed for sustain-style loops, not concrete loops;
when I tried them they didn't really do what I wanted.

Try changing the sample rate (44100), playback rate (1, try negative values), and offset to
confirm how they affect playback. The variable audio is the buffer containing the entire sound
file; offset moves the actual start time into the signal buffer. Remember that 44100 samples
represent one second of sound, so changing the offset in increments of 44100 will move you that
many seconds into the file.

Notice that there is a small burst of static at the end of the file. This is non-audio data. We will
fix this with the granular envelopes later.
29.1 Playing a soundfile

var audioInst, soundFile, audio, fileName;

soundFile = SoundFile.new; //create a soundfile


fileName = ":Sounds:africa1"; //enter file name
if(soundFile.read(":Sounds:africa1"), //test to see if the file exists
{audio = soundFile.data.at(0)}, //if so, set audio to the beginning
{(fileName ++ " not found.\n").post}); //if not, print error

//The audio playback instrument, using PlayBuf


audioInst = {PlayBuf.ar(audio, 44100, 1, 0)};

Pbind(
\ugenFunc, audioInst,
\dur, 5
).play

Changing playback speed and direction are interesting treatments for concrete studies, but they
have become a bit cliché for me, so we will focus on loops. But don't hesitate to add controls for
speed and direction on your own.
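For example, assuming the same africa1 file has been read into the variable audio, you might replace the audioInst line in the patch above with something like this sketch, which plays the buffer backwards starting two seconds in:

//rate of -1 reverses the playback direction; an offset of 44100*2
//moves the start point two seconds into the buffer
audioInst = {PlayBuf.ar(audio, 44100, -1, 44100*2)};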

To generate loops I use a Spawn UGen. The size of each loop is calculated by first determining
the size of the total signal (audio.size), choosing a begin point for the loop, then determining the
length of the loop as a percentage of the remainder. The loop begin point is somewhere between 0
and 60% (0.6) of the entire file. The loop length, in seconds, is calculated from the remainder
(audio.size – loopBegin –2; the –2 removes the data at the end of the file). This gives us the size
of the remainder in samples. That is multiplied by rrand(0.3, 1.0) and divided by 44100 to give
the time in seconds.

Some may find this method for choosing loop points a little too arbitrary. There are several other
strategies that allow more control. For precise loop points (selected using an audio editor, noting
the sample number where you would like the loop to begin and end), enter those percentages or
sample numbers into an array, as in "loopBegin = audio.size*[0.1, 0.13, 0.25, 0.31].choose."
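For instance, here is a sketch of the sample-number approach; the begin and end values are hypothetical numbers you would note in your audio editor:

var loopPoints, loopBegin, loopLength;
//each pair is [loopBegin, loopEnd] in samples, taken from an audio editor
loopPoints = [[22050, 66150], [88200, 132300]].choose;
loopBegin = loopPoints.at(0);
loopLength = (loopPoints.at(1) - loopBegin)/44100; //convert samples to seconds
[loopBegin, loopLength].postln;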

The loopLength is used not only to set the next spawn event (the next loop), but also the grain
envelope for each loop. The number of repetitions is chosen at random and used as the
maxRepeats argument for Spawn.

I've added an echo and three different amplitude controls. The control is chosen at the beginning
of the patch so that the same control treatment is used with each spawn.
29.2 loops

var audioInst, soundFile, audio, fileName;

soundFile = SoundFile.new; //create a soundfile


fileName = ":Sounds:africa1"; //enter file name
if(soundFile.read(":Sounds:africa1"), //test to see if the file exists
{audio = soundFile.data.at(0)}, //if so, set audio to the beginning
{(fileName ++ " not found.\n").post}); //if not, print error

//The audio playback instrument, using PlayBuf


audioInst = {
var loopBegin, loopEnd, grainEnv, loopLength, totalLoops, control;
control = 3.rand; //Control set here so the same is used with each spawn
loopBegin = audio.size*rrand(0, 0.6); //Choose a begin point and length
loopLength = (audio.size - loopBegin - 2)*rrand(0.3, 1.0)/44100;
totalLoops = [3, 4, 5, 6, 7].choose; //Number of loops
Spawn.ar({arg spawn, event;
var mix, rev, grainEnv;
//next spawn event end - begin divided by sample rate = seconds
spawn.nextTime = loopLength;
grainEnv = EnvGen.kr(Env.linen(0.01, 1, 0.01), timeScale: loopLength);
mix = PlayBuf.ar(audio, 44100, 1, loopBegin, mul: grainEnv);
mix = [//Simple stereo chorus (control 0)
DelayN.ar( mix, 0.02, [0.02, 0.01], //delay times
add: Pan2.ar(mix, rrand(-1.0, 1.0)) //Mix in the orig, panned.
), //A simple pan (control 1)
Pan2.ar( mix, LFNoise2.ar(2, //Speed of pan
mul: rrand(1.2, 2.0) //Amount of exaggeration of the pan.
).softclip //Keeps the values within 1.
), //An envelope generator controlled by an LFPulse
Pan2.ar(mix*EnvGen.kr(
Env.linen(0.01, loopLength*totalLoops, 0.01),
mul: LFPulse.ar(rrand(1.5, 10.0) //Freq, different for each spawn

)), 1.0.rand2) //Pan position
].at(control);

rev = CombN.ar( //Level of input (affects overall final volume)


mix*[0, rrand(0.1, 0.4)].choose,
2.0, [rrand(0.3, 1.9), rrand(0.3, 1.9)], //Actual delay
4 //decay time
);

//Mix the results. levelScale is overall volume.


Mix.ar([mix, rev])*
EnvGen.kr(
Env.linen(0.01, loopLength*totalLoops, 0.01), levelScale: 0.3);

}, 2,
maxRepeats: totalLoops
)};

Pbind(
\ugenFunc, audioInst,
\dur, Pfunc({rrand(5.0, 10.0)})
).play;

It doesn't take long for me to lose interest in a single sample, so the next step is to read in a group
of audio files and select them at random. They are all named "africa" followed by a number. This
simplifies the loading process. The variable audio is replaced with audioArray. (I could have left
the name the same, but this clarifies the code). And all the files are read into the array. From
there it is a matter of choosing which clip to use and modifying the code to include a .at message.

Here are the lines where the soundfiles are loaded. I'm going to add one more treatment before
printing the entire patch again.
29.3 sound file array

fileNames = Array.fill(7, {arg i; ":Sounds:africa" ++ i.asString});


audioArray = Array.fill(7, {0});
fileNames.size.do({arg eachFile;
var thisSound;
thisSound = SoundFile.new;
if (thisSound.read(fileNames.at(eachFile)), {
audioArray.put(eachFile, thisSound.data.at(0));
},{ (fileNames.at(eachFile) ++ " not found.\n").post; nil });
});

In the final example I add a loop extension. The loopLength is first calculated as a percentage.
The extension is then the remainder percentage divided by number of repeats. Then loopLength
is calculated as before. At the end of each spawn loopLength is increased by the extension value.
The loopLength has been adjusted to smaller values (0.1 to 6.0) to accommodate the extension.
The extension is either 0 (no extension) or the calculation for the remaining percentage. It is
weighted using windex. (Remember that windex returns an index number, so it must be used
with .at.) I think if I listened to the patch for any length of time I would probably want to add
more interesting controls and weighted controls.

29.4 concrete study

var audioInst, soundFile, audioArray, fileNames;

fileNames = Array.fill(7, {arg i; ":Sounds:africa" ++ i.asString});


audioArray = Array.fill(7, {0});
fileNames.size.do({arg eachFile;
var thisSound;
thisSound = SoundFile.new;
if (thisSound.read(fileNames.at(eachFile)), {
audioArray.put(eachFile, thisSound.data.at(0));
},{ (fileNames.at(eachFile) ++ " not found.\n").post; nil });
});

//The audio playback instrument, using PlayBuf


audioInst = {
var loopBegin, grainEnv, loopLength, totalLoops, control, source;
var extension;
source = audioArray.size.rand; //choose one of the sound files at random
control = 3.rand; //Control set here so the same is used at each spawn
totalLoops = [3, 4, 5].choose; //Number of loops
loopBegin = audioArray.at(source).size*rrand(0, 0.2); //Choose begin
loopLength = rrand(0.2, 0.6);
extension = [(0.9 - loopLength)/totalLoops, 0].at(windex([0.7, 0.3]));
loopLength = (audioArray.at(source).size - loopBegin - 2)*loopLength/44100;
"//".post;
"source, control, loopBegin, loopLength, extension, totalLoops".postln;
"//".post; [fileNames.at(source),
["chorus", "pan", "pulse"].at(control),
loopBegin/44100, loopLength, extension, totalLoops].postln;
Spawn.ar({arg spawn, event;
var mix, rev, grainEnv;
//next spawn event end - begin divided by sample rate = seconds
spawn.nextTime = loopLength;
grainEnv = EnvGen.kr(Env.linen(0.01, 1, 0.01), timeScale: loopLength);
mix = PlayBuf.ar(audioArray.at(source), 44100, 1, loopBegin, mul: grainEnv);
mix = [//Simple stereo chorus (control 0)
DelayN.ar( mix, 0.02, [0.02, 0.01], //delay times
add: Pan2.ar(mix, rrand(-1.0, 1.0)) //orig signal, panned.
), //A simple pan (control 1)
Pan2.ar( mix, LFNoise2.ar(2, //Speed of pan
mul: rrand(1.2, 2.0) //Amount of exaggeration of the pan.
).softclip //Keeps the values within 1.
), //An envelope generator controlled by an LFPulse
Pan2.ar(mix*EnvGen.kr(
Env.linen(0.01, loopLength*totalLoops, 0.01),
mul: LFPulse.ar(rrand(1.5, 10.0) //Freq, different for each spawn
)), 1.0.rand2) //Pan position
].at(control);
loopLength = loopLength + (loopLength*extension);
rev = CombN.ar(
mix*[0, rrand(0.1, 0.4)].choose, //Level of input
2.0, [rrand(0.3, 1.9), rrand(0.3, 1.9)], //Actual delay
4 //decay time
);

//Mix the results. levelScale is overall volume.

Mix.ar([mix, rev])*EnvGen.kr(
Env.linen(0.01, loopLength*totalLoops, 0.01),
levelScale: 0.3);
}, 2,
maxRepeats: totalLoops
)};

Pbind(
\ugenFunc, audioInst,
\dur, Pfunc({rrand(5.0, 10.0)})
).play;

30. Tuning Systems

30 Assignment

TBA

In this text I will only demonstrate how to use SC to realize ideas in other tuning systems. For a
discussion of tuning, refer to the Harvard Dictionary of Music, the entry for Interval.

The capacity for SC to realize complex tuning systems is virtually unlimited.
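As a small illustration (my own sketch, not a worked example from this chapter), a just intonation triad can be built by multiplying a fundamental frequency by whole number ratios rather than stepping through equal tempered midi values:

(
Synth.scope({
	var fund;
	fund = 60.midicps; //middle C as the fundamental
	Mix.ar(
		SinOsc.ar(
			fund * [1, 5/4, 3/2], //unison, just major third, just perfect fifth
			mul: 0.2
		)
	)
})
)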

future sections

1/f noise

(External controls—mouse, pad, video feed)

(External controls, live audio)

APPENDIX

A. Distribution using SCPlay

One of the issues raised by generative composition, and electro-acoustic music in general, is the
format for performance. I've always been a little uncomfortable with the concert setting where
the lights are dimmed and taped music is played. It seems contrived. Why are we all listening at
once to something we could experience individually at any time?

I believe electro-acoustic music presents an opportunity to redefine the way we experience
music. As with most of the comparisons I've made with traditional music, SC shows strengths on
both ends of the spectrum. It can be used in a concert hall, and presents much less hassle than
most performances in electro-acoustic media. Yet it can also be used in installations, where
music is constantly played as part of the total experience, or it can be displayed as one would works
of art in a museum (computer stations that allow the concert audience to roam and
admire each work at their leisure). But the most fascinating prospect to me is publishing
generative CDs (that play different music each time you put them in), or publishing your work
on the web. The authors of SC have included tools for this type of publication; SCPlay, main.sc,
Commons folder, and a default library.

You might be able to come up with a scheme to do this with the tools you already have. You
could send out over the net or through email a copy of SuperCollider with all its library files (the
files it reads and compiles at startup) along with the files containing patches that you've created.
There are three problems with this method. First, the user doesn't have a registration number for
SC so they won't be able to use it for long. The second is that there are a lot of files that
complicate handling SC (i.e. they have to be in the right position in relation to SC). They make
the total size more cumbersome to handle (with SC a total of about 2.6 Meg on an HFS+ format,
close to 6 Meg on standard format, depending on hard drive size). The third problem is that you
may have files and patches that you want to protect from being copied. If you just send out your
files then someone else can use them without your permission.

The solution to this is twofold: first, you can build your patches into the SC program (that is,
include them in the Lib menu) and second, you can use a compressed library and the program
SCPlay. SCPlay is a partially disabled SC program that only plays files. It is intended for
distribution without registration. SCPlay knows how to read a compressed library, which
contains all the files in the folder Common and the file Main.sc, but in a compressed format. This
is less confusing for the user, smaller for transfer, and protects your files from being copied. You
can then distribute your default library and the SCPlay (or just tell people to download the
SCPlay on their own). The compressed library (named Default.lib) is only about 144K; a much
more manageable file size. You can distribute any number of libraries and name them anything
you want. SC just looks for Default.lib when it first launches but you can open another library
file from within the SC program or by double clicking on the library using the finder.

The compressed libraries then become the medium, similar to a music CD, but one that will
playback only on a PPC Macintosh. You can also burn a CD that has the SCPlay program and
the default libraries. Bingo; a CD that generates different music every time you play it.

Here is how it works: When SC first launches it runs a file called "Main.sc" which is located in
the DefaultLibrary folder, which must reside in the same folder as the SC application. It also
reads and compiles files that reside in the Common folder (usually an alias to a Common folder)
which holds the main library of files that define all the functions it understands. The functions
that come with the program (such as Synth.scope) are loaded at that time.

It is possible to switch libraries and recompile SC. If the new library has different functions or
code SC will then work according to the instructions contained in the new library. This feature is
both powerful and dangerous. Basically you can rewrite the building blocks that make the
program run (a pretty rare capability in programs these days). The danger of course, is that you
can break things easily (although the worst thing that can happen is you get an error). In that case
you can always revert to the original files and recompile.

Here are some guidelines for modifying the source code: Make a copy of the originals and
modify that copy. Modify existing code rather than building code from scratch. For example, if
the original Main.sc has this:

Library.put(['Tones','sine'], Synth.scope({SinOsc.ar(200, mul: 0.5)}));

Then you could modify it to this:

Library.put(['Noise','pink'], Synth.scope({PinkNoise.ar(0.5)}));

Inside of the Common folder are library folders such as Audio folder, which has files such as
Synth.sc, SinOsc.sc, Pan.sc, etc. These contain the code for the Synth object, SinOsc object, Pan
objects and so on. You can modify these files and recompile, changing the way these objects
function. For example, you might want to change the default values of some of these files. The
SinOsc, for example, has a default multiply of 1.0. This is often too loud for me, so I might want
to set the default to 0.4. That way if I write just SinOsc.ar(200) in code the default mul will be
0.4, and I don't have to type that line. But here is where you can really do some damage, so I
wouldn't advise changing much here unless you are pretty comfortable with smalltalk. You can,
however, modify Main.sc without doing much harm.

Changing Libraries, Editing Main.sc, Recompiling, Compressing

The first thing you should try is changing one of the libraries and recompiling. To do this, first
copy the folder DefaultLibrary and rename it TestLibrary. Open and make a copy of String.sc (in
the Collections folder, which is in the Common folder) and place the copy in the folder
TestLibrary. Do not rename it. Open it, and find the line below.

error { "ERROR:\n".post; this.postln; }

And change it to this

error { "ERROR: (hello world)\n".post; this.postln; }

then save the file (don't close it yet).

Now in SuperCollider open a new document. (Do this so that you can see the messages printed to
the screen when it goes through the compile.) Select the menu item "Choose New Library" from
the Lang menu. Find the newly created folder ("TestLibrary"), open that folder and hit the button
"Select TestLibrary." You will see messages come up on the screen that indicate SC has selected
this new library and compiled it. Now try typing and running this:

5.bongo

Since "bongo" is not a recognized command you will get an error message but it should now
include your new addition "hello world."

You can continue to make changes to this library. You don't need to choose the library again; SC
will continue to compile with this new library until you indicate another library. From here on
out it is just a matter of changing files, saving them, and recompiling. Go to the file "String.sc"
again and add this line just below the error line you just modified:

error { "ERROR: (hi mom)\n".post; this.postln; }


mypost { "Message for you sir: ".post; this.postln; }

Save the file. (Don't close it, just save.) Switch back to the blank file you opened before then
recompile. (This is because you are going to get a load of messages printed to the screen and you
don't want them dumped into the String.sc file.) Choose Recompile Library from the Lang menu
(or hit command-k). After it has compiled (and assuming you didn't get any errors—if you did
then fix it and recompile) you can use your newly created function the same way you would any
SC code. Write and run this line:

"hi mom!".mypost;

This should return "Message for you sir: hi mom!"

But as I said earlier, I don't think you will be doing much modification of the source code. Our
goal is to change the file Main.sc. The Main.sc that comes with SC and loads when you launch
SC is pretty dense, and it has a lot of examples by the author, so I've stripped down our Main.sc
to include just a few examples.

One of the things that Main.sc can do when it runs is to add menu items to the Lib menu. This is
pretty straightforward: it uses the object and message "Library.put" with two arguments: the
menu name and the function to be executed.
30.1 Library.put

Library.put(['Sine Tone'],
{
Synth.play({SinOsc.ar(1000, mul: 0.5)})
});

This code will place a new menu item named "Sine Tone" in the Lib menu and will execute the
Synth.play line when it is chosen. If you want a submenu use two symbols in the array: ['Sine
tones', '200hz'] etc. Actually, you don't have to put this code in the Main.sc file to get it to work.
Try typing it in a blank file, run it (it will seem like nothing happens), then click on the Lib menu
and see that the item has indeed been added to the Lib menu.
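For instance, the submenu version might look like this sketch (the menu names and frequencies are just placeholders):

//both items will appear under a 'Sine tones' submenu of the Lib menu
Library.put(['Sine tones', '200hz'],
	{Synth.play({SinOsc.ar(200, mul: 0.4)})});

Library.put(['Sine tones', '400hz'],
	{Synth.play({SinOsc.ar(400, mul: 0.4)})});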

You may have tried running some of the patches that load normally when you launch SC and
you see a "mixer" that allows you to add or remove sounds. I've found this to be unreliable in my
own projects. If I choose the wrong things in the menu or if I use Pseq or Pfunc, as we have
been, then I get pretty serious crashes, so I've abandoned it until the author addresses these bugs.

The other code in the Main.sc that you can use is the "run" method. Anything you put in this
function will be executed when you choose "Run Main" from the Lang menu, or when you hit
command-r. If you only have one patch you want to distribute with SCPlay (as in the case of a
concert where you just want to start SC and hit "play" so to speak) then this is useful.

Pretty much anything you create in your experiments can be added to Main.sc using the
Library.put method and specifying the name you want it to be, then adding the code as the
second argument.

After I add these menu items and the patches I usually do a lot of tweaking. Since I don't have to
choose a new library (it will always compile using the current library) each time, I can just hit
compile each time I want to try the latest version. I can usually have the file I'm working on (e.g.
Main.sc) open, and a "post" file open for messages when I compile. Then I rotate through these
steps: Bring Main.sc to front, make changes, hit command-s to save, bring "post" window to
front and hit command-k (recompile), check for errors in the compile messages, then test SC to
see if the changes took, return to Main.sc, repeat until you get everything the way you want.

Compressed Libraries

When you do finally get everything the way you want, you are ready to compress the library.
Choose "Compress Library" from the Lang menu and save the file. You can name it anything
you like, but Default.lib is a good choice if you are doing just one file. When you launch SCPlay
it automatically looks for Default.lib.

Once you have saved the .lib file you can open it using SC, SCPlay, or simply double click on
the library after having opened SC or SCPlay. Now you can distribute the compressed library file
along with SCPlay, or just distribute the file and tell the receiver where to find SCPlay.
Everything you put in the Main.sc will appear when the compressed library is opened. The user
can either hit command-r to run your patch, or choose patches from the Lib menu. You can even
change the message that is printed to the screen when SC first starts up. Such a message might
contain instructions about how to use the Lib menu, how to stop playback, and some program
notes about your piece.

The last step to a generative CD is to burn the CD. All you need is the default library and SCPlay
either on the desktop file of the CD, or inside a folder titled e.g. "My Collection." You could
include other default libraries and name them accordingly; "Markov chains.lib", "Classic Synthesis.lib", "Real Time Dsp.lib", etc. The files will only take up about 4M max, but will
represent billions of hours of music.

There is only one serious bug: If you choose a new item before you hit command-period the
machine locks up but the sound manager continues to play a really irritating (and possibly
harmful) sound. The only solution I've been able to come up with is to print warnings in Main.sc
when it first runs, and to put up a GUI box that tells the user to stop playback using command-period. (We haven't talked much about GUI. There is really a lot you can do with them, but it's a
rather complex topic and not that useful in this class.) Examples using more complicated patches
and a GUI window are in Lib2, which is in the group folder.

B. Patches for practice

Patch I: Latch or Sample and Hold

First we set up an oscillator with no controls.


(
Synth.scope({
SinOsc.ar(
freq: 440,
mul: 0.5
);
})
)

Replace the static 440 with an LFSaw. Try changing each of the LFSaw arguments (freq, mul, and
add) to see what effect it has on the sound. Remember that the add should be higher than the mul
(to avoid negative values). Try using a MouseX.kr(minVal, maxVal) for each of those numbers to try a range of
values. Even the patches that I build from scratch get layered so much that I can't always
remember which argument controls which aspect of the sound. Substituting a MouseX control
allows me to quickly review how a particular value changes the sound.
(
Synth.scope({
SinOsc.ar(
freq: LFSaw.ar(freq: 1, mul: 200, add: 600), //Saw controlled freq
mul: 0.5
);
})
)

Now I place the LFSaw inside a Latch, which samples the wave and holds it at that value until
another sample is taken. The frequency range is still the same, but the values are no longer
continuous, but discrete. Ten times per second the Latch samples the LFSaw and holds on until a
new value is sampled.
(
Synth.scope({
SinOsc.ar(
freq: Latch.ar( //Using a latch to sample the LFSaw
LFSaw.ar(1, 200, 600), //Input wave
Impulse.ar(10) //Trigger (rate of sample)
),
mul: 0.5
);
})
)

Here is the same patch using Blip. The second argument of Blip is the number of harmonics (the 3
in the example below). Try different values for harmonics. Try changing the 1.1 to 1. Why is this
less interesting? It's because the frequency of the trigger (Impulse) is then an exact multiple of the
frequency of the wave, and you get the same samples every time. Try a MouseX in place of the 1.1
to see how the patterns change as that value changes.
(
Synth.scope({
Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
LFSaw.kr(1.1, 500, 700), //Input for Latch
Impulse.kr(10)), //Sample trigger rate
3, //Number of harmonics in Blip
mul: 0.3 //Volume of Blip
)
})
)

The frequency of the LFSaw created interesting patterns when it was set to anything between 1.1
and 10. Here is an example with a Line.kr controlling the freq of the sampled wave. It begins at
0.01 and then moves to 10 over 100 seconds. What other controls could you use besides a
Line.kr? LFNoise? LFSaw? SinOsc?
(
Synth.scope({
Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
LFSaw.kr( //input for Latch
Line.kr(0.01, 10, 100), //Freq of input wave, was 1.1
300, 500), //Mul. and Add for input wave
Impulse.kr(10)), //Sample trigger rate
3, //Number of harmonics in Blip
mul: 0.3 //Volume of Blip
)
})
)

In this version I've assigned the entire blip section of code to a variable. Then at the bottom I use
a line of code that we will explain later. It adds reverb. I then put the variable "signal" as the last
line of the function. The last line of a function is returned to the synth.
(
Synth.scope({
var signal;
signal = Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
LFSaw.kr( 6.18, //Freq of input wave (Golden Mean)
300, 500), //Mul. and Add for input wave
Impulse.kr(10)), //Sample trigger rate
3, //Number of harmonics in Blip
mul: 0.3 //Volume of Blip
);
//reverb
2.do({ signal = AllpassN.ar(signal, 0.05, [0.05.rand, 0.05.rand], 4) });
signal //return the variable signal
})
)

Now that I have the Blip instrument assigned to signal, I can insert it into a Pan2 Ugen. Each time, I
set signal equal to a new version of itself after passing it through some method of processing.
(
Synth.scope({
var signal;
signal = Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
LFSaw.kr( 6.18, //Freq of input wave
300, 500), //Mul. and Add for input wave
Impulse.kr(10)), //Sample trigger rate
3, //Number of harmonics in Blip
mul: 0.3 //Volume of Blip
);
signal = Pan2.ar(
signal, //input for the pan,
LFNoise1.kr(1) //Pan position, between -1 and 1, changing 1 time per second
);
//reverb
2.do({ signal = AllpassN.ar(signal, 0.05, [0.05.rand, 0.05.rand], 4) });
signal //return the variable signal
})
)

Remember the number of harmonics? In this section I replace the static value of 3 with a UGen
LFNoise to move between 1 and 27 (mul of 13, so -13 to 13, add 14, 1 to 27). Try changing the
0.3, 13 and 14. The add (14) should always be higher than mul (13) to avoid negative values.
(
Synth.scope({
var signal;
signal = Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
LFSaw.kr( 6.18, //Freq of input wave
300, 500), //Mul. and Add for input wave
Impulse.kr(10)), //Sample trigger rate
LFNoise1.kr(0.3, 13, 14), //Number of harmonics in Blip
mul: 0.3 //Volume of Blip
);
signal = Pan2.ar(
signal, //input for the pan
LFNoise1.kr(1) //Pan position.
);
//reverb
2.do({ signal = AllpassN.ar(signal, 0.05, [0.05.rand, 0.05.rand], 4) });
signal //return the variable signal
})
)

A while back we studied envelopes. In this example I've added an envelope and multiplied the
entire output (the variable signal) by the envelope. Try changing the attack and decay. Now
rather than a continuous sound it is an event with an attack and decay. Later we will place it in a
Pbind for multiple events.
(
Synth.scope({

var signal, env1;
env1 = Env.perc(
0.001, //attack of envelope
2.0 //decay of envelope
);
signal = Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
LFSaw.kr( 6.18, //Freq of input wave
300, 500), //Mul. and Add for input wave
Impulse.kr(10)), //Sample trigger rate
LFNoise1.kr(0.3, 13, 14), //Number of harmonics in Blip
mul: 0.3 //Volume of Blip
);
signal = Pan2.ar(
signal, //input for the pan
LFNoise1.kr(1) //Pan position.
);
//reverb
2.do({ signal = AllpassN.ar(signal, 0.05, [0.05.rand, 0.05.rand], 4) });
signal*EnvGen.kr(env1) //return the variable signal
})
)

Now I take the entire instrument and assign it to the variable "inst", and use it in a Pbind to play.
By this I mean that everything within the outermost braces (this is the ugenFunc, or first
argument of the .scope message) is extracted from the Synth.scope and is stored in the variable
inst below. Try changing the \dur argument in the Pbind. There is a discussion of Pbind in a
separate example.
(
var inst;

inst = {
var signal, env1;
env1 = Env.perc(
0.001, //attack of envelope
2.0 //decay of envelope
);
signal = Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
LFSaw.kr( 6.18, //Freq of input wave
300, 500), //Mul. and Add for input wave
Impulse.kr(10)), //Sample trigger rate
LFNoise1.kr(0.3, 13, 14), //Number of harmonics in Blip
mul: 0.3 //Volume of Blip
);
signal = Pan2.ar(
signal, //input for the pan
LFNoise1.kr(1) //Pan position
);
//reverb
2.do({ signal = AllpassN.ar(signal, 0.05, [0.05.rand, 0.05.rand], 4) });
signal*EnvGen.kr(env1) //return the variable signal
};

Pbind(
\dur, 0.3,

\ugenFunc, inst
).play;

Finally, I replace all the static values with random and dynamic values.
(
var inst;

inst = {

var signal, env1, env2, env3, mulValue, numHarmonics;


mulValue = 100 + 600.rand;
numHarmonics = 1 + 13.rand;
env1 = Env.perc(0.001, 3);
env2 = Env.perc(3.0.rand, 3.0.rand);
env3 = Env.perc(3, 0.0001);
signal = Pan2.ar(Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
LFSaw.kr( 6.18.rand, //Freq of input wave
mulValue, 200 + mulValue), //Mul. and Add for input wave
Impulse.kr(3 + 20.rand)), //Sample trigger rate
//Number of harmonics in Blip
1 + LFNoise1.kr(1.0.rand, numHarmonics, 1 + numHarmonics),
mul: 1.0 //Volume of Blip
), Line.kr(1.0.rand2, 1.0.rand2, 1 + 1.0.rand) // pan position
);
//reverb
2.do({ signal = AllpassN.ar(signal, 0.05, [0.05.rand, 0.05.rand], 4) });
signal*EnvGen.kr([env1, env2, env3].choose)*0.5 //return signal
};

Pbind(
\dur, [0, 1.0, 2.0, 2.5, 5].choose,
\ugenFunc, inst
).scope;

Patch II: Pulse

In this patch I will start with a variable to store the instrument. I declare the variable (out), and
make it equal to the Ugen Pulse.ar. For frequency try some very low values, like 30 to 100. But
the most interesting aspect of a Pulse is the pulse width. Try values between 0.1 and 0.9. I would
suggest you try changing the pulse width but not the frequency (i.e. don't do both at once).
(
Synth.scope({
var out;
out = Pulse.ar(
200, //Frequency.
0.5, //Pulse width. Change with MouseX
0.5
);
out

})
)

Next I will use an LFNoise1 to control the frequency of the Pulse. Try changing, or substituting
any of the arguments with a mouse or other static values to see how that value changes the
sound. Remember add must be greater than mul (in this case). Be sure that you don't add a
control that will take the mul value higher than add. For example, if you substituted the static 20
with a SinOsc.kr(2, 30, 40), on the surface it looks like all your values would be ok because they
don't exceed 60 (the mul is 30). But you would actually be generating values between 10 and 70,
because of the add. The return of the SinOsc.kr, if used for the mul argument in LFNoise1,
would at some point return a value of 70. Since that is the mul of the LFNoise, the LFNoise
would be generating values between -70 and 70. With an add of 60 in the LFNoise you could
get a final result of -10 (60 - 70).
(
Synth.scope({
var out;
out = Pulse.ar(
LFNoise1.kr(
0.1, //Freq of LFNoise change
20, //mul = (-20, to 20)
60 //add = (40, 80)
),
0.5, 0.5);
out
})
)

To save space I'm going to compact each section of code that has previously been explained. In
the next example the LFNoise1 is reduced to a single line. I've then added a SinOsc control for
the second argument of pulse, or the width of the pulse wave. Values for the pulse width should
be between 0.01 and 0.99, so the mul and add of the SinOsc need to be changed to reflect this. So
the mul is 0.45, which returns -.45 to .45. Then the add is .46, bringing -.45 to .01 and .45 to .91.
Try different controls instead of the SinOsc. Maybe an LFPulse, or an LFNoise0. Also try faster
frequencies.
(
Synth.scope({
var out;
out = Pulse.ar(
LFNoise1.kr(0.1, 20, 60),
SinOsc.kr(
0.2, //Freq of SinOsc control
mul: 0.45,
add: 0.46
),
0.5);
out
})
)

Next I do a very simple but effective trick. If I use an array of values for any one of the
arguments inside a Ugen then SC splits the entire patch into stereo pairs, using the first value in
the array for the left channel and the second value for the right channel. If I split any additional
argument it will be matched accordingly: the first to the left channel, the second to the right. All
values that are not split using arrays will be duplicated as is. In this example I split the LFNoise
frequency. But try splitting others too. I'll let you discover what happens if you split the mul and
add for LFNoise1. (Now set to static values of 20 and 60). But don't set either of the mul values
to anything higher than the add. Multichannel expansion using arrays is one of my favorite
features in SC. It's a quick and simple way to get a richer sound.
(
Synth.scope({
var out;
out = Pulse.ar(
LFNoise1.kr([0.1, 0.12], 20, 60),
SinOsc.kr( 0.2, mul: 0.45, add: 0.46),
0.5);
out
})
)

Next I run the out variable through an AllpassN delay which was copied from a patch in the
examples folder. This is another secret I can let you in on. When you are working with code
rather than synthesizer modules (e.g. a DX7 or Korg) you can often take sections from one patch
and insert them into your patch, even though you don't know exactly what each parameter in that
section of code does. You can use them like mini effects units. I've compacted all the lines
above. That first line "out = . . ." looks pretty dense, but can you see what values control which
aspects of the sound now that we've worked through them one by one?
(
Synth.scope({
var out;
out = Pulse.ar(LFNoise1.kr([0.1, 0.12], 20, 60),
SinOsc.kr( 0.2, mul: 0.45, add: 0.46),0.5);
2.do({out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4)});
out
})
)

Now that I listen to it, I like smaller values for the pulse width:
(
Synth.scope({
var out;
out = Pulse.ar(LFNoise1.kr([0.1, 0.12], 20, 60),
SinOsc.kr( 0.2, mul: 0.05, add: 0.051),0.5);
2.do({out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4)});
out
})
)

Add an envelope with an attack and decay chosen from two possibilities, and the sustain chosen
as a random value between 0 and 2.0. Run it several times to see that the envelope changes.
(
Synth.scope({

var out, env;
env = Env.linen([0.0001, 1.0].choose, 2.0.rand, [0.0001, 1.0].choose);
out = Pulse.ar(LFNoise1.kr([0.1, 0.12], 20, 60),
SinOsc.kr( 0.2, mul: 0.05, add: 0.051),0.5);
2.do({out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4)});
out*EnvGen.kr(env)
})
)

Now that I have an envelope describing the shape of the event in real time I need to trigger it
automatically. To do this I'm going to set the ugenFunc above (everything between the { and })
to a variable i1 and place that variable in a Pbind. The Pbind generates events using an
environment. If you don't supply any parameter of the environment a default is used. In this
example I'll supply the ugenFunc (instrument) and the duration. The syntax is \symbol, value,
etc. For duration lets start with just 1. The duration is a little misleading. It isn't the duration of
the instrument, but the duration until the next event. If the instrument has an envelope of 2
seconds, but \dur is set to one second, then the events will be one second apart and each will last
2 seconds (thus overlapping).
(
var i1;
i1 = {
var out, env;
env = Env.linen([0.0001, 1.0].choose, 2.0.rand, [0.0001, 1.0].choose);
out = Pulse.ar(LFNoise1.kr([0.1, 0.12], 20, 60),
SinOsc.kr( 0.2, mul: 0.05, add: 0.051),0.5);
2.do({out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4)});
out*EnvGen.kr(env)
};

Pbind(
\dur, 1,
\ugenFunc, i1
).scope;
)

Next I'll add the instrument from Patch I.


(
var i1, i2;

i1 = {
var out, env;
env = Env.linen([0.0001, 1.0].choose, 2.0.rand, [0.0001, 1.0].choose);
out = Pulse.ar(LFNoise1.kr([0.1, 0.12], 20, 60),
SinOsc.kr( 0.2, mul: 0.05, add: 0.051),0.5);
2.do({out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4)});
out*EnvGen.kr(env)
};

i2 = {

var out, env1, env2, env3, mulValue, numHarmonics;


mulValue = 100 + 600.rand;
numHarmonics = 1 + 13.rand;

env1 = Env.perc(0.001, 3);
env2 = Env.perc(3.0.rand, 3.0.rand);
env3 = Env.perc(3, 0.0001);
out = Pan2.ar(Blip.ar( //Audio Ugen
Latch.kr( //Freq control Ugen
LFSaw.kr( 6.18.rand, //Freq of input wave
mulValue, 200 + mulValue), //Mul. and Add for input wave
Impulse.kr(3 + 20.rand)), //Sample trigger rate
1 + LFNoise1.kr(1.0.rand, numHarmonics, 1 + numHarmonics),
mul: 1.0 //Volume of Blip
), Line.kr(1.0.rand2, 1.0.rand2, 1 + 1.0.rand) // pan position
);
//reverb
2.do({ out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4) });
out*EnvGen.kr([env1, env2, env3].choose)*0.5 //return the variable out
};
Pbind(
\dur, Prand([0, 1.0, 2.0, 2.5, 5], inf),
\ugenFunc, Prand([i1, i2], inf)
).scope;
)

I've also used a "Prand" for both dur and ugenfunc. It's explained a little more in depth later. For
now just understand that Prand chooses values at random from an array of values. The second
argument for Prand is number of repeats. In these examples I use "inf", which means infinite
repeast. This patch would go forever if we let it. How do we get it to stop on its own? To do a set
number of times, just enter that number for the second argument in Prand (instead of inf), e.g. 20.
Count the events to confirm that there are only 20 events.
(
var i1, i2;

[same as above]

Pbind(
\dur, Prand([0, 1.0, 2.0, 2.5, 5], 20),
\ugenFunc, Prand([i1, i2], inf)
).scope;

How do I build more controlled structural choices? By using Pseq, which steps through a set of
values in an array. I'm also going to duplicate i1 and simply change the range of the frequency. I
can then use the Pseq to give a feeling of "modulating" near the middle of the piece, using i3,
which has a little higher sound than i1, as a secondary tonal area.
(
var i1, i2, i3;

i1 = [same as above]

i2 = [same as above]

i3 = {
var out, env;

env = Env.linen([0.0001, 1.0].choose, 2.0.rand, [0.0001, 1.0].choose);
out = Pulse.ar(LFNoise1.kr([0.1, 0.12], 60, 200),
SinOsc.kr( 0.2, mul: 0.05, add: 0.051),0.5);
2.do({out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4)});
out*EnvGen.kr(env)
};

Pbind(
\dur, Prand([0, 1.0, 2.0, 2.5, 5], 30),
//first section uses only i1 and i2, 10 events
\ugenFunc, Pseq([i1, i1, i1, i2, i1, i1, i2, i2, i1, i2,
i3, i3, i2, i3, i2, i2, //i3 is introduced, 6 events
//return of i1 mixed, 15 events
i1, i1, i1, i2, i2, i2, i2, i3, i2, i3, i3, i1, i1, i1
], inf)
).scope;

There's an even slicker way to do it. You can nest Prand and Pseq statements thus:
(

var i1, i2, i3;

[same as above]

Pbind(
\dur, Prand([0, 1.0, 2.0, 2.5, 5], 30),
\ugenFunc, Pseq([ //overall Pseq
Prand([i1, i2], 5), //first item in overall Pseq
Pseq([i3], 5), //second item
Prand([i1, i2, i3], 10), //third, etc.
Prand([i2, i3], 5),
Pseq([i2], 5)
], inf) //repeat the entire pattern inf
).scope;

Patch III: FM

This patch illustrates FM (frequency modulation).

The basic patch begins with a SinOsc, and a second SinOsc to control the frequency. The first
example has a low speed and amplitude as an illustration, then we will increase the values into
the FM range. The speed or frequency of the control SinOsc is 5, the amplitude, or mul of the
control SinOsc is 10. The add is 800. So the control SinOsc will move between 790 and 810 five
times per second. These values are used by the audio SinOsc for freq. The second mul (0.3) is
the amplitude of the audio SinOsc. Change each of the values and note the resulting change in
the audio.
(
Synth.scope({
var out;

out = SinOsc.ar(
SinOsc.ar( //control Osc
5, //freq of control
mul: 10, //amp of control
add: 800), //add of control
mul: 0.3 //amp of audio SinOsc
);
out
})
)

When you increase the frequency of the control SinOsc to values that are also in the audio range
(above 60), additional frequencies appear as side bands. Try using MouseX.kr to move the freq
from 5 to 240. As you move to the right you will hear two quieter frequencies, one that moves up
and one that moves down as you move the mouse to the right. These are the sidebands. (The
terms "carrier" and "modulator" have been used to describe these two oscillators. The outer
SinOsc is the carrier, the inner SinOsc is the modulator. But the add of the inner SinOsc is the
carrier frequency, and the frequency of the inner SinOsc is the modulator frequency.)
(
Synth.scope({
var out;
out = SinOsc.ar(
SinOsc.ar( //control Osc
MouseX.kr(5, 240), //freq of control
mul: 10, //amp of control
add: 800), //add of control
mul: 0.3 //amp of audio SinOsc
);
out
})
)

In this example I'll pick a value for the freq that is high enough to produce a sideband, but now
the MouseX will control the amplitude of the control SinOsc. The frequencies of the side bands
will remain the same but the number of sidebands will increase.
(
Synth.scope({
var out;
out = SinOsc.ar(
SinOsc.ar( //control Osc
131, //freq of control
mul: MouseX.kr(10, 700), //amp of control
add: 800), //add of control
mul: 0.3 //amp of audio SinOsc
);
out
})
)

This example shows both freq and mul control by mouse. Freq by MouseX, and amp by
MouseY.
(

Synth.scope({
var out;
out = SinOsc.ar(
SinOsc.ar( //control Osc
MouseY.kr(10, 230), //freq of control
mul: MouseX.kr(10, 700), //amp of control
add: 800), //add of control
mul: 0.3 //amp of audio SinOsc
);
out
})
)

You can control the add also, but it is important that the add is always greater than the highest
possible value of the mul, so that you don't get negative values. One way to ensure this is to use a
variable for mul, then that variable + 100 for the add. This will ensure that the add is always 100
higher than mul. But try values other than 100.
(
Synth.scope({
var out, mulControl;
mulControl = MouseX.kr(10, 700);
out = SinOsc.ar(
SinOsc.ar( //control Osc
MouseY.kr(10, 230), //freq of control
mul: mulControl, //amp of control
add: mulControl + 100), //add will be 100 greater than mulControl
mul: 0.3 //amp of audio SinOsc
);
out
})
)

There are a lot of interesting sounds outside of the 100; I'd say you could use values between 100
and 1000. Do you think we could add a Ugen to control that value too? Sure, but first let's add
some automatic controls for the other values. I'll substitute LFNoise1 for the mouse controls,
but feel free to try Line.kr, LFNoise0, SinOsc, or LFSaw for other effects. I'll let you work out
the math of the add and mul of each LFNoise.
(
Synth.scope({
var out, mulControl;
mulControl = LFNoise1.kr(0.2, 300, 600); //store control in variable
out = SinOsc.ar(
SinOsc.ar( //control Osc
LFNoise1.kr(0.4, 120, 130), //freq of control
mul: mulControl, //amp of control
add: mulControl + 100), //add will be 100 greater than mulControl
mul: 0.3 //amp of audio SinOsc
);
out
})
)

Now I'll add a control for the 100 added to the mulControl. I want it to produce values between 100 and
1100, so mul is 500, add is 600.
(
Synth.scope({
var out, mulControl;
mulControl = LFNoise1.kr(0.2, 300, 600);
out = SinOsc.ar(
SinOsc.ar( //control Osc
LFNoise1.kr(0.4, 120, 130), //freq of control
mul: mulControl, //amp of control
add: mulControl + LFNoise1.kr(0.1, 500, 600)), //add of control
mul: 0.3 //amp of audio SinOsc
);
out
},
0.03 //Size of window
)
)

Here I use multichannel expansion again, splitting one value into an array to create
stereo signals.
(
Synth.scope({
var out, mulControl;
mulControl = LFNoise1.kr([0.2, 0.5], 300, 600);
out = SinOsc.ar(
SinOsc.ar( //control Osc
LFNoise1.kr(0.4, 120, 130), //freq of control
mul: mulControl, //amp of control
add: mulControl + LFNoise1.kr(0.1, 500, 600)), //add of control
mul: 0.3 //amp of audio SinOsc
);
out
},
0.03 //Size of window
)
)

Run it through a reverb and envelope. For this envelope I'm going to use very short attacks, short
decays, and a short sustain.
(
Synth.scope({
var out, mulControl, env, effectEnv;
effectEnv = Env.perc(0.001, 3);
env = Env.linen(0.01.rand, 0.3.rand, 0.01.rand);
mulControl = LFNoise1.kr([0.2, 0.5], 300, 600);
out = SinOsc.ar(
SinOsc.ar( //control Osc
LFNoise1.kr(0.4, 120, 130), //freq of control
mul: mulControl, //amp of control
add: mulControl + LFNoise1.kr(0.1, 500, 600)), //add of control
mul: 0.3 //amp of audio SinOsc
);

out*EnvGen.kr(env);
// 2.do({ out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4)});
},
0.03 //Size of window
)
)

Place it into a Pbind. The duration (next event) is a choice between very short values.
(
var i1;

i1 = {

var out, mulControl, env;


env = Env.linen(0.001 + 0.01.rand, 0.001 + 0.3.rand, 0.001 + 0.01.rand);
mulControl = LFNoise1.kr([0.2, 0.5], 300, 600);
out = SinOsc.ar(
SinOsc.ar( //control Osc
LFNoise1.kr(0.4, 120, 130), //freq of control
mul: mulControl, //amp of control
add: mulControl + LFNoise1.kr(0.1, 500, 600)), //add of control
mul: 0.1 //amp of audio SinOsc
);
2.do({ out = AllpassN.ar(out, 0.05, [0.05.rand, 0.05.rand], 4)});
out*EnvGen.kr(env)
};

Pbind(
\dur, Prand([0, 0.1, 0.2, 0.3], inf),
\ugenFunc, i1
).scope
)

Patch IV: Sequencer

First, a simple Synth with a SinOsc. For the frequency argument I've inserted a sequencer, which
takes an array of values and moves to each new value in the array at the rate of a trigger. For the array
I use a series of frequencies and an Impulse.kr as a trigger. Try changing the frequencies in the
array and the frequency of the trigger.

Synth.scope({
SinOsc.ar( //Audio source: sine oscillator
Sequencer.kr( //frequency control for SinOsc; a sequencer
`[300, 454, 346, 768, 234, 988], //Array of pitches to be sequenced
Impulse.kr(8) //trigger for impulse: 8 times per second
),
mul: 0.5 //volume of SinOsc
)
})

Next we change the frequency array to an array of midi values, which are easier to calculate. To
convert them into actual frequencies I enclose the Sequencer in parentheses and apply the
midicps message.

(
Synth.scope({
SinOsc.ar( //Audio source: sine oscillator
(Sequencer.kr( //frequency control for SinOsc; a sequencer
`[60, 67, 65, 54, 66, 69], //Array of pitches to be sequenced
Impulse.kr(8) //trigger for impulse: 8 times per second
)).midicps,
mul: 0.5 //volume of SinOsc
)
})
)

The point behind a sequencer is to repeat the same values over and over. If I wanted to do an
ascending scale I could type in all the values for the midi pitches, but it would be faster to fill the
array automatically. I can do this using Array.fill. The first argument for .fill is the number of
items in the array, the second is a function describing how to generate the numbers being stored in
the array. So I declare a variable midiPitch, and in the function I set midiPitch equal to itself
plus 2 (that is, add 2 to the previous value of midiPitch). Then I "return" that value. By return, I
mean send that value to the next position of the array. Let's do an example with just the array.
Run it several times to see how it fills the array with values. Change the values (the number of
elements and the + 2 increment) to see how it changes how the array is filled.
var pitchArray, midiPitch;
midiPitch = 0;
pitchArray = Array.fill(
5, //5 elements in the array
{ //use this function to generate values
midiPitch = midiPitch + 2; //add 2 to previous midiPitch
midiPitch //"return" midiPitch to the array
}
);
pitchArray.postln; //print the results

When placed in the patch it becomes a whole tone scale. But I don't want to start with midi pitch
0, which would be too low, but rather something like 48.
(
Synth.scope({
var pitchArray, midiPitch;
midiPitch = 48;
pitchArray = Array.fill(
12, //12 elements in the array
{ //use this function to generate values
midiPitch = midiPitch + 2; //add 2 to previous midiPitch
midiPitch //"return" midiPitch to the array
}
);
SinOsc.ar( //Audio source: sine oscillator
(Sequencer.kr( //frequency control for SinOsc; a sequencer
`pitchArray, //Array of pitches to be sequenced
Impulse.kr(8) //trigger for impulse: 8 times per second
)).midicps,
mul: 0.5 //volume of SinOsc
)
})

)

Not very interesting; I don't care much for whole tone scales. What if we started at midi pitch 36
(C2) and increased it by either 5 (perfect fourth) or 7 (perfect fifth)? The result is an arpeggio of
4ths and 5ths. This raises a potential problem: what if the value exceeds a reasonable limit for a
midi value? If we choose 24 values and 7 is chosen half the time and 5 the other half, the average
increase would be 6. 6*24 is 144, + 36 is 180, which is beyond a reasonable range for midi. (120, or
C9, is about as high as you want to go, for pitched values at least.) Here is another area where SC
is more powerful than standard synthesizers: it understands logic. I can say "if you exceed a
given value, reset midi pitch." The syntax for the if statement is if(test statement, {true function},
{false function}). The false function can be left off. The test statement usually includes some
type of comparison such as > (greater than), < (less than), == (equals), != (does not equal).
(
Synth.scope({
var pitchArray, midiPitch;
midiPitch = 36;
pitchArray = Array.fill(
12, //12 elements in the array
{ //use this function to generate values
midiPitch = midiPitch + [5, 7].choose; //add 7 or 5
if(midiPitch > 120, //test to see if it exceeds 120
{midiPitch = 36} //if it is, reset it to 36
);
midiPitch //"return" midiPitch to the array
}
);
SinOsc.ar( //Audio source: sine oscillator
(Sequencer.kr( //frequency control for SinOsc; a sequencer
`pitchArray, //Array of pitches to be sequenced
Impulse.kr(8) //trigger for impulse: 8 times per second
)).midicps,
mul: 0.5 //volume of SinOsc
)
})
)

Now I'll scramble the pitchArray. Try changing the beginning pitch, the interval to increase midi
by (now [5, 7]), highest midi value, and the trigger. Could you figure out how to control the
trigger either with a random value, or another Ugen? I've also replaced SinOsc with a blip
oscillator and the variable out.

(
Synth.scope({
var pitchArray, midiPitch, out;
midiPitch = 36;
pitchArray = Array.fill(
12, //12 elements in the array
{ //use this function to generate values
midiPitch = midiPitch + [5, 7].choose; //add 5 or 7
if(midiPitch > 120, //test
{midiPitch = 36} //if it is, reset it to 36

);
midiPitch //"return" midiPitch to the array
}
);
pitchArray = pitchArray.scramble;
out = Blip.ar( //Audio source: Blip oscillator
(Sequencer.kr( //frequency control for SinOsc; a sequencer
`pitchArray, //Array of pitches to be sequenced
Impulse.kr(8) //trigger for impulse: 8 times per second
)).midicps,
3, //number of harmonics
mul: 0.5 //volume of SinOsc
);
out
})
)

Next I add random values. These may not make a lot of sense with a single instance of the
instrument, but when we put it in a Pbind it will make more sense. In that case every time the
instrument is played by the Pbind new values will be chosen. To see how the random values
change just run the patch a few times.
(
Synth.scope({
var pitchArray, midiPitch, out, trigger;
midiPitch = 24 + 12.rand;
pitchArray = Array.fill(
4 + 22.rand, //between 4 and 26 elements in the array
{ //use this function to generate values
midiPitch = midiPitch + [5, 7].choose; //add 5 or 7 to prev
if(midiPitch > 120, //test to see if midiPitch is > 120
{midiPitch = 36} //if it is, reset it to 36
);
midiPitch //"return" midiPitch to the array
}
);
pitchArray = pitchArray.scramble;
trigger = 6 + 12.0.rand;
out = Blip.ar( //Audio source: Blip oscillator
(Sequencer.kr( //frequency control for SinOsc; a sequencer
`pitchArray, //Array of pitches to be sequenced
Impulse.kr(trigger) //trigger for impulse
)).midicps,
2 + 6.rand, //number of harmonics
mul: 0.5 //volume of SinOsc
);
out
})
)

Here is the stereo array trick again. I'll expand the pitchArray in the sequencer to left and right
channels. The left channel is the regular pitchArray; the right side is the pitchArray + 2, or the
same pitches a whole step higher. Try changing it to values other than 2.
(
Synth.scope({

[same as above]
out = Blip.ar( //Audio source: Blip oscillator
(Sequencer.kr( //frequency control for SinOsc; a sequencer
[`pitchArray, `(pitchArray + 2)], //Array of pitches to be sequenced
Impulse.kr(trigger) //trigger for impulse: 8 times per second
)).midicps,
2 + 6.rand, //number of harmonics
mul: 0.5 //volume of SinOsc
);
out
})
)

Next I connect it to a reverb. This time we'll expand the reverb a little more and see what some of the values do. They are covered in the comments. The last two arguments are multiply and add. The multiply represents the "wet" signal, or the reverb. The add is the variable out, or the input, or the dry signal. So I've put a Line.kr on both of these arguments so that the wet (mul) increases gradually while the dry signal (add) becomes quieter.
(
Synth.scope({
[same as above]
2 + 6.rand, //number of harmonics
mul: 0.5 //volume of Blip
);
2.do({
out = AllpassN.ar( //Use AllpassN to generate reverb
out, //the signal source for AllpassN "input"
0.8, //max delay time. This should be greater
//than either number below
[0.8.rand, 0.8.rand], //actual delay time. A random value,
//different for left and right channels
1, //decay time of reverb
Line.kr(0, 0.3, 7), //mul, or amplitude of "wet" sound.
//From 0 (no sound) to 0.3 in 7 seconds
out*Line.kr(1.0, 0.6, 5) //"dry" signal, from 1.0,
//or maximum volume, to 0.6 over 5 seconds
)
});
out
})
)

A lot of the examples we've done cannot really be considered "tonal." Electronic treatment results in sounds that defy tonal categories, but it doesn't have to exclude tonal systems. Here is the same patch except that the array is filled with choices from the midi array [0, 4, 6, 7, 11], which is a major seventh, sharp eleven chord (if built on C it would be C - 0, E - 4, F# - 6, G - 7, and B - 11). It's built on a base of either 0 (C) or 7 (G). So you could say this example uses a CMaj7 #11 and a GMaj7 #11; sort of in the key of G, or C. Try changing the chordSet array to other chords. I also change the right channel pitchArray in the Sequencer to pitchArray + 12. Adding 2 would compromise the chord with seconds; instead it is doubled at the octave.
(
Synth.scope({
var pitchArray, midiPitch, out, trigger, chordSet;

chordSet = [0, 4, 6, 7, 11]; // A Maj7 #11 chord
midiPitch = [0, 7].choose; //0 would be C, 7 would be G
pitchArray = Array.fill(
4 + 22.rand, //between 4 and 25 elements in the array
{ //use this function to generate values
chordSet.choose + midiPitch //choose pitch + midi
+ //add the chosen pitch to
(3 + 5.rand * 12) //an octave between octave 3 and 7
}
);
trigger = 6 + 12.0.rand;
out = Blip.ar( //Audio source: Blip oscillator
(Sequencer.kr( //frequency control for Blip; a sequencer
[`pitchArray, `(pitchArray + 12)], //Array of pitches to be
[same as above]
})
)
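For example (a hypothetical substitution; everything else in the patch stays the same), swapping in a minor seventh chord gives the same texture a darker color. The octave placement is still handled by the (3 + 5.rand * 12) line.

chordSet = [0, 3, 7, 10]; //a minor 7th chord (C, Eb, G, Bb if built on C)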

Placed in a Pbind:

(
var seqInst;

seqInst = {

var pitchArray, midiPitch, out, trigger, chordSet, env1;


env1 = Env.perc(0.001, 5 + 5.0.rand, 0.2 + 0.7.rand);
chordSet = [0, 4, 6, 7, 11]; // A Maj7 #11 chord
midiPitch = [0, 7, 5, 2].choose; //0 would be C, 7 would be G
pitchArray = Array.fill(
4 + 22.rand, //between 4 and 25 elements in the array
{ //use this function to generate values
chordSet.choose + midiPitch //choose a chord tone
+ //add the chosen pitch to
(3 + 5.rand * 12) //an octave between octave 3 and 7
}
);
trigger = 6 + 12.0.rand;
out = Blip.ar( //Audio source: Blip oscillator
(Sequencer.kr( //frequency control for Blip; a sequencer
[`pitchArray, `(pitchArray + 12)], //Array of pitches
Impulse.kr(trigger) //trigger for impulse
)).midicps,
2 + 4.rand, //number of harmonics
mul: 0.5 //volume of Blip
);
2.do({
out = AllpassN.ar( //Use AllpassN to generate reverb
out,
0.8,
[0.8.rand, 0.8.rand],
1, //decay time of reverb
Line.kr(0, 0.3, 7),
out*Line.kr(1.0, 0.6, 5)
)
});

out*EnvGen.kr(env1)

};

Pbind(
\dur, Pfunc({[1.0, 3.333, 5].choose}),
\ugenFunc, seqInst
).play
)

Patch V Filter

This patch demonstrates filtering.

There are a number of filter modules available in SC. Each works with a wave form that contains a rich upper harmonic structure. It filters some of the harmonics and lets others pass. The name often describes which frequencies it allows to pass. For example, a Low Pass Filter will filter out the upper frequencies and allow the low frequencies to pass. A High Pass Filter will filter the low frequencies and allow the upper harmonics to pass through. A Band Pass Filter will allow frequencies within a band to pass through and filter those outside it. Here are examples of each with a mouse control. The first argument is the input signal, the second is the cutoff, or the frequency above which, or below which, frequencies will be allowed to pass through. The frequency for the input wave is low (100 Hz). Since it is a saw wave it generates a full spectrum of harmonics at multiples of the fundamental (2, 3, 4, etc.), so the harmonics are 200, 300, 400, 500, 600, and so on.
(
Synth.scope({
RLPF.ar( //resonant low pass filter
Saw.ar(100, 0.2), //input wave at 100 Hz
MouseX.kr(100, 10000) //cutoff frequency
)}
)
)

Same thing, but with a high pass filter. This filter allows high frequencies to pass, and low
frequencies are filtered.

(
Synth.scope({
RHPF.ar( //resonant high pass filter
Saw.ar(100, 0.2), //input wave at 100 Hz
MouseX.kr(100, 10000) //cutoff frequency
)}
)
)
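For completeness, here is the band pass case. This sketch uses BPF (the Butterworth band pass listed in the appendix) rather than a resonant filter; only frequencies near the mouse-controlled center frequency are allowed through.

(
Synth.scope({
BPF.ar( //Butterworth band pass filter
Saw.ar(100, 0.2), //input wave at 100 Hz
MouseX.kr(100, 10000), //center frequency of the band
0.3 //rq, the width of the band relative to the center
)}
)
)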

These are "resonant" filters because you can supply a value that represents how much each upper
harmonic will resonate.

Here is a filter being controlled with a SinOsc. Compare it to the next patch.

(
Synth.scope({
RLPF.ar(
Saw.ar(100, 0.2),
SinOsc.kr(0.2, 0, 900, 1100)
)})
)

With resonant control. Notice that as you approach the right (values nearing 0.001) you can hear
each of the upper harmonics stand out.

(
Synth.scope({
RLPF.ar(
Saw.ar(100, 0.2),
SinOsc.kr(0.2, 0, 900, 1100),
MouseX.kr(1.0, 0.001) //resonance, or "Q"
)})
)

I usually set it to between 0.001 and 0.01

You can control both the parameters of the input Ugen and the cutoff of the filter. This uses an
LFNoise to control the frequency of the saw, and another LFNoise to control the cutoff of the
filter.

(
Synth.scope({
RLPF.ar(
Saw.ar( //input wave
LFNoise1.kr(0.3, 50, 100),//freq of input
0.1
),
LFNoise1.kr(0.1, 4000, 4400), //cutoff freq
0.04 //resonance
)})
)

Next I substitute a Pulse for the Saw, with a very narrow width and a low volume (the mul argument).

(
Synth.scope({
var freq;
freq = LFNoise1.kr(0.3, 50, 100);
RLPF.ar(
Pulse.ar( //input wave
freq,//freq of input
0.1, //pulse width
0.1 //mul, or volume of pulse
),

LFNoise1.kr(0.1, 4000, 4400), //cutoff freq
0.04 //resonance
)})
)

Instead of using an LFNoise for the Pulse freq, I'll use a sequencer. I'll set up two arrays, one for
left channel, one for right.

(
Synth.scope({
var freq, leftArray, rightArray;
leftArray = Array.fill(20, {40 + 20.rand});
rightArray = Array.fill(19, {40 + 20.rand});

freq = Sequencer.kr(
[`(leftArray.midicps), `(rightArray.midicps)],
Impulse.kr(12)
);
RLPF.ar(
Pulse.ar( //input wave
freq,
0.1, //pulse width
0.1 //mul, or volume of pulse
),
LFNoise1.kr(1, 4000, 4400), //cutoff freq
0.04 //resonance
)})
)

Now here is a clever trick. In the LFNoise1, which I use to control the cutoff frequency, I have a
multiply of 4000 and an add of 4400. That results in values of 4400 + and - 4000 (400 to 8400).
The default output of the Ugen is 1 to -1, and we know that when I multiply that by 4000 I end
up with +4000 and -4000. But if I were to multiply it by -4000, I get the opposite values: -4000
(1* (-4000) = -4000) to +4000 (-1*(-4000) = 4000). In other words, all the values would be
opposite that of 4000. If I had two Ugens, one with 4000 and one with -4000 in the mul
parameter, then whenever the 4000 version was at, say 150, the -4000 would be -150. They are
mirror images of each other. So in the LFNoise1 I use an array to split the mul argument into a
stereo pair using 4000 for one and -4000 for the other.

(
Synth.scope({
var freq, leftArray, rightArray;
leftArray = Array.fill(20, {40 + 20.rand});
rightArray = Array.fill(19, {40 + 20.rand});

freq = Sequencer.kr([`(leftArray.midicps),
`(rightArray.midicps)], Impulse.kr(12));
RLPF.ar(
Pulse.ar( //input wave
freq,

0.1, //pulse width
0.1 //mul, or volume of pulse
),
LFNoise1.kr(1, [4000, -4000], 4400), //cutoff freq
0.04 //resonance
)})
)

Add an envelope

(
Synth.scope({
var freq, out, leftArray, rightArray, env;

env = Env.linen(2.0.rand, 1 + 1.0.rand, 2.0.rand);

leftArray = Array.fill(20, {40 + 20.rand});


rightArray = Array.fill(19, {40 + 20.rand});

freq = Sequencer.kr([`(leftArray.midicps), `(rightArray.midicps)],


Impulse.kr(12));
out = RLPF.ar(
Pulse.ar( //input wave
freq,
0.1, //pulse width
0.1 //mul, or volume of pulse
),
LFNoise1.kr(1, [4000, -4000], 4400), //cutoff freq
0.04 //resonance
);

out*EnvGen.kr(env)
})
)

Add a Pbind

(
var i1;

i1 = {

var freq, out, leftArray, rightArray, env;

env = Env.linen(2.0.rand, 1 + 1.0.rand, 2.0.rand);

leftArray = Array.fill(20, {40 + 20.rand});


rightArray = Array.fill(19, {40 + 20.rand});

freq = Sequencer.kr([`(leftArray.midicps), `(rightArray.midicps)],
Impulse.kr(12));
out = RLPF.ar(
Pulse.ar( //input wave
freq,
0.1, //pulse width
0.1 //mul, or volume of pulse
),
LFNoise1.kr(1, [4000, -4000], 4400), //cutoff freq
0.04 //resonance
);

out*EnvGen.kr(env)
};

Pbind(
\dur, Pfunc({[0.2, 0.7, 1, 3.3, 5].choose}),
\ugenFunc, i1
).scope
)

Using Pbind

In each of the patches I end with a Pbind. Pbind is a tool for bringing several different patches
together in a single composition. Pbind understands the .play and .scope message. It works like a
Synth, but you have the option of using different instruments and scheduling events and event
times. The Pbind uses an environment (about which you can read more in the help files). It also
has a protoEvent that contains default values for each event. Each aspect of the event is called a
binding. They are analogous to arguments in a Ugen. Here is an example with just one binding,
that of duration, or time until the next event. The binding is preceded with a backslash '\' and the name of that binding, then a comma, then the value or function that applies to that binding. Change the 0.25 to confirm it is changing the time between events.
(
Pbind(
\dur, 0.25
).play
)

Here are some other binding values. Try changing each one. Notice that you can make the sustain greater than the duration.
(
Pbind(
\dur, 1,
\sustain, 0.15,
\midinote, 62
).play
)

There are lots of bindings we could work with, and we do a lot in the next semester (computer
assisted composition). But for our projects we only need to be concerned with two: dur, or next
event, and ugenFunc, or the actual patch being used to generate sounds. In the example above
you may have noticed the actual sound was something like a simple blip instrument without
much character or interest. That was the default instrument. You should also have noticed that it
had an envelope built in. That is, it decayed away by itself. Having envelopes on each of your
patches is essential when working with Pbind.

Here is a simple Synth patch with an envelope.

(
Synth.play(
{
var e;
e = Env.linen(0.001, 0.2, 0.5, 0.5);
Blip.ar(600, numharm: 4, mul: EnvGen.kr(e))
}
)
)

We are concerned with the ugenFunc for the patch. It is everything between the braces. We can
use that function in the Pbind, and in this case we have no more need for the Synth, since the
Synth is part of the Pbind environment. The difference between the Synth and the Pbind is that
the Synth plays it once, the Pbind plays it over and over.
(
Pbind(
\dur, 1,
\ugenFunc, {
var e;
e = Env.linen(0.001, 0.2, 0.5, 0.5);
Blip.ar(600, numharm: 4, mul: EnvGen.kr(e))
}
).play
)

Notice that if I change the frequency parameter and numharm parameter to a random choice, then each time it plays a new pitch and number of harmonics are chosen. This time I'll use rrand, which returns a random value within the range of its two arguments.

(
Pbind(
\dur, 1,
\ugenFunc, {
var e;
e = Env.linen(0.001, 0.2, 0.5, 0.5);
Blip.ar(rrand(200.0, 600.0), numharm: rrand(1, 6), mul: EnvGen.kr(e))

}
).play
)

For clarity, I'll place the ugenFunc above and store it in a variable. Nothing else is different about
this next patch.
(
var blipInst;

blipInst = {
var e;
e = Env.linen(0.001, 0.2, 0.5, 0.5);
Blip.ar(rrand(200.0, 600.0), numharm: rrand(1, 6), mul: EnvGen.kr(e))
};

Pbind(
\dur, 1,
\ugenFunc, blipInst
).play
)

Where Pbind becomes useful is in its ability to stream values, that is, to feed a series of values to the Synth (or in this case the Synth that is buried in the Pbind environment). There is a good discussion of streams and patterns in the documentation, which you should read. But for our class I will describe four: Pfunc, Pseq, Pshuf, and Prand. Pfunc returns the next value for a given
function. Pseq returns values in sequence. Prand returns a random choice from a given set. Pshuf
scrambles the elements for each repeat. For Prand, Pshuf, and Pseq the arguments and syntax are
the same:

Prand([list of items], numberOfRepeats)

Pseq([list of items], numberOfRepeats)

Pshuf([list of items], numberOfRepeats)

For Pfunc, you just describe the function in braces:

Pfunc({function})

I'll demonstrate Pfunc first. I will insert the function {rrand(0.2, 2.0)}, which means choose a
value between 0.2 and 2.0. The Pfunc replaces the static value of 1, and works in the same way
we have previously used Ugens to control and change static values in patches.

(
var blipInst;

blipInst = {
var e;
e = Env.linen(0.001, 0.2, 0.5, 0.5);
Blip.ar(rrand(200.0, 600.0), numharm: rrand(1, 6), mul: EnvGen.kr(e))

};

Pbind(
\dur, Pfunc({rrand(0.2, 2.0)}), //Choose a value between 0.2 and 2.0
\ugenFunc, blipInst
).play
)

Same example, but using a Pseq. The Pseq steps through the values in the array one at a time and
repeats that array. The list of values is the first argument (an array), followed by the number of
repeats (6).

(
var blipInst;

blipInst = {
var e;
e = Env.linen(0.001, 0.2, 0.5, 0.2);
Blip.ar(rrand(200.0, 600.0), numharm: rrand(1, 6), mul: EnvGen.kr(e))
};

Pbind(
\dur, Pseq([1.2, 1.0, 0.8, 0.6, 0.4, 0.2], 6),
\ugenFunc, blipInst
).play
)

In the example above 36 events are played. The array has 6 events and the entire array is
repeated 6 times (total of 36). If you want the sequence to go on forever you can use the symbol
'inf'.
(
var blipInst;

blipInst = {
var e;
e = Env.linen(0.001, 0.2, 0.5, 0.2);
Blip.ar(rrand(200.0, 600.0), numharm: rrand(1, 6), mul: EnvGen.kr(e))
};

Pbind(
\dur, Pseq([1.2, 1.0, 0.8, 0.6, 0.4, 0.2], inf),
\ugenFunc, blipInst
).play
)

Prand chooses random values for the given number of repeats. Notice that the Pseq moved through the entire array 6 times (36 total), whereas Prand chooses only 6 values.

(
var blipInst;

blipInst = {
var e;
e = Env.linen(0.001, 0.2, 0.5, 0.2);
Blip.ar(rrand(200.0, 600.0), numharm: rrand(1, 6), mul: EnvGen.kr(e))
};

Pbind(
\dur, Prand([1.2, 1.0, 0.8, 0.6, 0.4, 0.2], 6),
\ugenFunc, blipInst
).play
)

Before I show you Pshuf, let me point out that you can nest each of the three Pseq, Prand, and
Pshuf. Each stream will step through its values the given number of times then move on to the
next stream. For example:
Pseq([Prand([1, 2], 4), Pseq([1, 2, 3], 5), Pshuf([1, 2, 3], 2)], 5)

This will pick from 1 and 2 four times, then play 1, 2, 3 in order five times, then a scrambled version of the 1, 2, 3 pattern twice, then repeat that entire process five times.

Here is the same example spread out:

Pseq(
[ //Beginning of the entire list
Prand([1, 2], 4), //first item: pick 1, or 2 four times
Pseq([1, 2, 3], 5), //second: sequence through 1, 2, 3 five times
Pshuf([1, 2, 3], 2) //third: shuffle and play 1, 2, 3 two times
], 5 //repeat all of the above 5 times
)

The example below uses obvious values so you can follow along.

(
var blipInst;

blipInst = {
var e;
e = Env.linen(0.001, 0.2, 0.5, 0.2);
Blip.ar(rrand(200.0, 600.0), numharm: rrand(1, 6), mul: EnvGen.kr(e))
};

Pbind(
\dur, Pseq(
[ //Beginning of list of values
Pseq([0.75, 0.25], 6), //seq of 0.75 and 0.25 six times
Prand([0.1, 0.2, 0.3, 0.4, 0.5], 8), //choose 8 of these
Pshuf([0.1, 0.2, 0.4], 4) //shuffle and play these 4xs
], 4), //repeat the entire thing four times
\ugenFunc, blipInst
).play

)

You can do the same with the ugenFunc. I'll add another instrument to demonstrate. Notice also
that I use "inf" for the number of repeats in the outermost Pseq for ugenFunc. That means it will
go forever, or in this case until the Pseq for dur ends.

(
var blipInst, pulseInst;

blipInst = {
var e;
e = Env.linen(0.001, 0.2, 0.5, 0.2);
Blip.ar(rrand(200.0, 600.0), numharm: rrand(1, 6), mul: EnvGen.kr(e))
};

pulseInst = {
var e;
e = Env.linen(0.001, 0.2, 0.5, 0.2);
SinOsc.ar(LFPulse.kr(12, mul: 100 + 300.rand, add: 800 + 800.rand),
mul: EnvGen.kr(e))
};

Pbind(
\dur, Pseq(
[ //Beginning of list of values
Pseq([0.75, 0.25], 6), //seq of 0.75 and 0.25 six times
Prand([0.1, 0.2, 0.3, 0.4, 0.5], 8), //choose 8 of these in a row
Pshuf([0.1, 0.2, 0.4], 4) //shuffle and play these 4xs
], 4), //repeat the entire thing four times
\ugenFunc, Pseq(
[ //Begin list
Pseq([blipInst], 4), //play blip 4 times
Prand([blipInst, pulseInst], 4), //choose between the two four times
Pseq([blipInst, pulseInst], 8), //play the two in order eight times
Pseq([pulseInst], 4) //finish with pulseInst four times
], inf)
).play
)

This is as much as you need to know for your final projects. What you need to do is develop your patches using the presets in the previous files, then pull the ugenFunc out into a variable, which is then used by the Pbind. But if you would like to move on, there are some additional tricks.
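Before moving on, here is one possible skeleton for a project file (a sketch with a placeholder instrument; substitute one of your own patches, keeping an envelope on it):

(
var myInst;

myInst = { //your patch, pulled out into a variable
var e;
e = Env.linen(0.01, 0.3, 0.5, 0.4); //an envelope so each event dies away
SinOsc.ar(300 + 500.0.rand, mul: EnvGen.kr(e)) //placeholder patch
};

Pbind(
\dur, Pfunc({rrand(0.5, 2.0)}), //time until the next event
\ugenFunc, myInst //the instrument to play
).play
)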

Sharing values in an environment.

Another useful feature of an environment and the Pbind prototype is that you can share values
between instruments. Suppose, for example, you wanted to use the two instruments above, but
have them play a series of pitches between them. That is, the first instrument plays the first pitch,
the second the second, then the first instrument plays the third pitch and so on. You need some
method for passing the common sequence of pitches to the instruments.

The method for doing this is to declare an argument at the beginning of the instrument function.
That argument is then matched with a binding in the Pbind. In the Pbind you then add your own
binding with a backslash followed by the symbol you want to use.

In the following example I've simplified the instruments so you can identify pitch. The first
instrument is a blip, the second a pulse. In addition I've created a new binding: pitchSeries, and a
matching argument in the instruments. I've also created an attack binding, and a volume binding.

(
var blipInst, pulseInst;

blipInst = {
arg pitchSeries, attSeries, volume;
var e;
e = Env.linen(attSeries, 0.2, 0.5, volume);
Blip.ar(pitchSeries, 3, mul: EnvGen.kr(e))
};

pulseInst = {
arg pitchSeries, attSeries, volume;
var e;
e = Env.linen(attSeries, 0.2, 0.5, volume);
Pulse.ar(pitchSeries, mul: EnvGen.kr(e))
};

Pbind(
\pitchSeries, Pseq([60, 62, 64, 65, 67].midicps, 5),
\attSeries, Pseq([0.0001, 0.1, 0.5], inf),
\volume, Pseq([0.1, 0.2, 0.3, 0.4], inf),
\dur, Pseq([0.5], inf),
\ugenFunc, Pseq([blipInst, pulseInst], inf)
).play
)

C. Pitch Chart:

Notes for the pitch chart:

PC-Pitch Class, MN-Midi Number, Int-Interval, MI-Midi Interval, ETR-Equal Tempered Ratio,
ETF-ET Frequency, JR-Just Ratio, JC-Just Cents, JF-Just Frequency, PR-Pythagorean Ratio, PC-
Pyth. Cents, PF-Pyth. Freq., MR-Mean Tone Ratio, MC-MT Cents, MF-MT Freq.

The shaded columns are chromatic pitches, and therefore correspond with the black keys of the
piano. The idea is to read the values that correspond with a pitch the way one would read a piano
keyboard.

Italicized values are negative.

Some scales show intervals, ratios, and cents from middle C. The Pythagorean scale is mostly
positive numbers showing intervals and ratios from C1. I did this to show the overtone series. All
ratios with a 1 as the denominator are overtones. They are bold. To invert the ratios just invert
the fraction; 3:2 becomes 2:3.

Just, Pyth., and Mean-tone are all calculated from an equal tempered C, 261.626. I first thought
of doing it from A 440. This seemed to make sense, but then the pitches would be for the key of
A, not C. There's something convoluted about deriving just ratios from an equal tempered pitch,
but there you go.
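To check a row, the arithmetic is just the ratio times that reference C; the equal tempered frequencies are 261.626 times 2**(n/12), where n is the number of half steps above middle C. For example, for the A above middle C (a sketch you can evaluate in SC):

(261.626 * (5/3)).postln; //just intonation A: 436.04, the JF column
(261.626 * (2 ** (9/12))).postln; //equal tempered A: 440.0, the ETF column
69.midicps.postln; //the same equal tempered A via midicps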

I restart the cents chart at 000 for C4 in the PC column because I just don't think numbers higher
than that are very useful.

A complete chart (with Pythagorean ratios, and the complete piano keyboard) is available from
the author.

References: Harvard Dictionary of Music, Interval. Computer Music by Charles Dodge, p. 42.

PC MN Int MI ETR ETF JR JR JF PR PF MR MF
C2 36 P15 24 0.250 65.41 1:4 0.250 65.406 0.500 65.406 0.250 65.406
Db 37 M14 23 0.265 69.30 4:15 0.266 69.767 0.527 68.906 0.268 70.116
D 38 m14 22 0.281 73.42 5:18 0.278 72.674 0.562 73.582 0.280 73.255
Eb 39 M13 21 0.298 77.78 3:10 0.300 78.488 0.592 77.507 0.299 78.226
E 40 m13 20 0.315 82.41 5:16 0.312 81.758 0.633 82.772 0.313 81.889
F 41 P12 19 0.334 87.31 1:3 0.333 87.209 0.667 87.219 0.334 87.383
F# 42 A11 18 0.354 92.50 16:45 0.355 93.023 0.702 93.139 0.349 91.307
G 43 P11 17 0.375 98.00 3:8 0.375 98.110 0.750 98.110 0.374 97.848
Ab 44 M10 16 0.397 103.8 2:5 0.400 104.65 0.790 95.297 0.400 104.65
A 45 m10 15 0.421 110.0 5:12 0.416 109.01 0.844 110.37 0.418 109.36
Bb 46 M9 14 0.446 116.5 4:9 0.444 116.28 0.889 116.29 0.447 116.95
B 47 m9 13 0.472 123.5 15:32 0.469 122.64 0.950 124.17 0.467 122.18
C3 48 P8 12 0.500 130.8 1:2 0.500 130.81 1.000 130.81 0.500 130.81
Db 49 M7 11 0.530 138.6 8:15 0.533 139.53 1.053 137.81 0.535 139.97
D 50 m7 10 0.561 146.8 5:9 0.556 145.35 1.125 147.16 0.559 146.25
Eb 51 M6 9 0.595 155.6 3:5 0.600 156.98 1.185 155.05 0.598 156.45
E 52 m6 8 0.630 164.8 5:8 0.625 163.52 1.266 165.58 0.625 163.52
F 53 P5 7 0.668 174.6 2:3 0.667 174.42 1.333 174.41 0.669 175.03
F# 54 A4 6 0.707 185.0 32:45 0.711 186.05 1.424 186.24 0.699 182.88
G 55 P4 5 0.749 196.0 3:4 0.750 196.22 1.500 196.22 0.748 195.70
Ab 56 M3 4 0.794 207.7 4:5 0.800 209.30 1.580 190.56 0.800 209.30
A 57 m3 3 0.841 220.0 5:6 0.833 218.02 1.688 220.75 0.836 218.72
Bb 58 M2 2 0.891 233.1 8:9 0.889 232.56 1.778 232.55 0.895 234.16
B 59 m2 1 0.944 246.9 15:16 0.938 245.27 1.898 248.35 0.935 244.62
C4 60 P1 0 1.000 261.6 1:1 1.000 261.63 2.000 261.63 1.000 261.63
Db 61 m2 1 1.059 277.2 16:15 1.067 279.07 2.107 275.62 1.070 279.94
D 62 M2 2 1.122 293.7 9:8 1.125 294.33 2.250 294.33 1.118 292.50
Eb 63 m3 3 1.189 311.1 6:5 1.200 313.95 2.370 310.06 1.196 312.90
E 64 M3 4 1.260 329.6 5:4 1.250 327.03 2.531 331.12 1.250 327.03
F 65 P4 5 1.335 349.2 4:3 1.333 348.83 2.667 348.85 1.337 349.79
F# 66 A4 6 1.414 370.0 45:32 1.406 367.91 2.848 372.52 1.398 365.75
G 67 P5 7 1.498 392.0 3:2 1.500 392.44 3.000 392.44 1.496 391.39
Ab 68 m6 8 1.587 415.3 8:5 1.600 418.60 2.914 381.12 1.600 418.60
A 69 M6 9 1.682 440.0 5:3 1.667 436.04 3.375 441.49 1.672 437.44
Bb 70 m7 10 1.782 466.2 9:5 1.800 470.93 3.556 465.10 1.789 468.05
B 71 M7 11 1.888 493.9 15:8 1.875 490.55 3.797 496.70 1.869 488.98
C5 72 P8 12 2.000 523.3 2:1 2.000 523.25 4.000 523.25 2.000 523.25
Db 73 m9 13 2.118 554.4 32:15 2.133 558.14 4.214 551.25 2.140 559.88
D 74 M9 14 2.244 587.3 9:4 2.250 588.66 4.500 588.66 2.236 585.00
Eb 75 m10 15 2.378 622.3 12:5 2.400 627.90 4.741 620.15 2.392 625.81
E 76 M10 16 2.520 659.3 5:2 2.500 654.06 5.063 662.24 2.500 654.06
F 77 P11 17 2.670 698.5 8:3 2.667 697.67 5.333 697.66 2.674 699.59
F# 78 A11 18 2.828 740.0 45:16 2.813 735.82 5.695 745.01 2.796 731.51
G 79 P12 19 2.996 784.0 3:1 3.000 784.88 6.000 784.88 2.992 782.78
Ab 80 m13 20 3.174 830.6 16:5 3.200 837.20 5.827 762.28 3.200 837.20
A 81 M13 21 3.364 880.0 10:3 3.333 872.09 6.750 882.99 3.344 874.88
Bb 82 m14 22 3.564 932.3 18:5 3.600 941.85 7.111 930.21 3.578 936.10
B 83 M14 23 3.776 987.8 15:4 3.750 981.10 7.594 993.36 3.738 977.96
C6 84 P15 24 4.000 1047 4:1 4.000 1046.5 8.000 1046.5 4.000 1046.5

C# 61 A1 1 1.059 277.2 25:24 1.042 272.53 1.068 279.38 1.045 279.94


Gb 66 d5 6 1.414 370.0 64:45 1.422 372.09 1.405 367.50 ** **
G# 80 A5 8 3.174 830.6 25:16 1.563 408.79 1.602 419.07 ** **
A# 70 m7 10 1.782 466.2 225:128 1.758 459.89 ** ** 1.869 488.98

D. UNIT GENERATORS:

Here is a list of unit generators available in SC. SC is being developed all the time and the list is
most certainly out of date.

Unary Operators

squared .. a*a
cubed .. a*a*a
sqrt .. square root
exp .. exponential
midicps .. MIDI note number to cycles per second
cpsmidi .. cycles per second to MIDI note number
midiratio .. convert an interval in MIDI notes into a frequency ratio
ratiomidi .. convert a frequency ratio to an interval in MIDI notes

Binary Operators

+ .. addition
- .. subtraction
* .. multiplication
/ .. division
% .. float modulo
** .. exponentiation
< .. less than
<= .. less than or equal
> .. greater than
>= .. greater than or equal
== .. equal
!= .. not equal
min .. minimum of two
max .. maximum of two
round .. quantization by rounding
trunc .. quantization by truncation

[There are many more. Look in the help file.]

Oscillators

COsc .. chorusing oscillator
COsc.ar(table, freq, beats, mul, add)

COsc2 .. dual table chorusing oscillator
COsc2.ar(table1, table2, freq, beats, mul, add)

SinOsc .. sine table lookup oscillator
SinOsc.ar(freq, phase, mul, add)
Returns values between -1 and 1.

FSinOsc .. very fast sine oscillator
FSinOsc.ar(freq, mul, add)

Klang .. bank of fixed frequency sine oscillators
Klang.ar(inSpecificationsArrayRef, iFreqScale, iFreqOffset, mul, add)

Blip .. band limited impulse oscillator
Blip.ar(freq, numharm, mul, add)
This is a quick way to generate a sound with a specified number of harmonics. The second
parameter will determine the harmonic content. Try controlling the numharm parameter with
another Ugen or a .rand or .choose.

Saw .. band limited sawtooth oscillator
Saw.ar(freq, mul, add)
Values supposedly between -1 and 1, but I've found this changes with frequency. Use Saw for
audio levels and LFSaw as a LFO control.

Pulse .. band limited pulse wave oscillator
Pulse.ar(freq, duty, mul, add)
Can be used as an audio signal, but is unreliable as a control.

Impulse .. non band limited impulse oscillator
Impulse.ar(freq, mul, add)
A single impulse, best used as a trigger.

Phasor .. sawtooth for phase input
Phasor.ar(freq, mul, add)

LFTri .. low freq (i.e. not band limited) triangle wave oscillator
LFTri.ar(freq, mul, add)
Can be used as an audio signal and a control. Returns values between -1 and 1.

LFSaw .. low freq (i.e. not band limited) sawtooth oscillator
LFSaw.ar(freq, mul, add)
Can be used as an audio signal and a control. Returns values between -1 and 1.

LFPulse .. low freq (i.e. not band limited) pulse wave oscillator
LFPulse.ar(freq, width, mul, add)
Can be used as an audio signal, trigger or a control. Returns values between 0 and 1.

Noise

The code for Noise generators is the same as Oscillators.

WhiteNoise .. white noise
WhiteNoise.ar(mul, add)
Audio range random signal. No filter.

PinkNoise .. pink noise
PinkNoise.ar(mul, add)
Audio range random signal with filter.

BrownNoise .. brown noise
BrownNoise.ar(mul, add)
Audio range random signal with filter.

GrayNoise .. bit flip noise
GrayNoise.ar(mul, add)
Audio range random signal with filter.

ClipNoise .. clipped noise
ClipNoise.ar(mul, add)
Audio range random signal that is clipped; all values are -1 or 1.

LFNoise0 .. low frequency noise, no interpolation
LFNoise0.ar(freq, mul, add)
Low frequency signal returning discrete values between -1 and 1.

LFNoise1 .. low frequency noise, linear interpolation
LFNoise1.ar(freq, mul, add)
Low frequency signal returning interpolated values between -1 and 1.

LFNoise2 .. low frequency noise, quadratic interpolation
LFNoise2.ar(freq, mul, add)

LFClipNoise .. low frequency clipped noise
LFClipNoise.ar(freq, mul, add)
The noise is clipped so that all values are max at -1 and 1, no values in-between. Can be used as
a random trigger.

Crackle .. chaotic noise function
Crackle.ar(chaosParam, mul, add)

Dust .. random positive impulses
Dust.ar(density, mul, add)

Dust2 .. random bipolar impulses
Dust2.ar(density, mul, add)

Filters

Resonz .. general purpose resonator
Resonz.ar(in, freq, bwr, mul, add)

RLPF .. resonant low pass filter
RLPF.ar(in, freq, rq, mul, add)

RHPF .. resonant high pass filter
RHPF.ar(in, freq, rq, mul, add)

LPF .. Butterworth low pass
LPF.ar(in, freq, mul, add)

HPF .. Butterworth high pass
HPF.ar(in, freq, mul, add)

BPF .. Butterworth band pass
BPF.ar(in, freq, rq, mul, add)

BRF .. Butterworth band reject
BRF.ar(in, freq, rq, mul, add)

RLPF4 .. fourth order resonant low pass filter
RLPF4.ar(in, freq, res, mul, add)

Controls

ControlIn .. read an external control source
ControlIn.kr(source, lagTime)

EnvGen .. break point envelope
EnvGen.ar(levelArrayRef, durArrayRef, mul, add, levelScale, levelBias, timeScale)

Trig .. timed trigger
Trig.ar(in, dur)

Trig1 .. timed trigger
Trig1.ar(in, dur)

Latch .. sample and hold
Latch.ar(in, trig)

Gate .. gate or hold
Gate.ar(in, trig)

Line .. line
Line.ar(start, end, dur, mul, add)

XLine .. exponential growth/decay
XLine.ar(start, end, dur, mul, add)

Sequencer .. clocked values
Sequencer.ar(sequence, clock, mul, add)

StepClock .. impulse clocks at timed intervals
StepClock.ar(stepArrayRef, rate, mul, add)

Amplitude Operators

Compander .. compresser, expander, limiter, gate, ducker
Compander.ar(input, control, threshold, slopeBelow, slopeAbove, clampTime, relaxTime, mul, add)

Normalizer .. flattens dynamics
Normalizer.ar(input, level, lookAheadTime)

Limiter .. peak limiter
Limiter.ar(input, level, lookAheadTime)

Amplitude .. amplitude follower
Amplitude.ar(input, attackTime, releaseTime, mul, add)

Pan2 .. stereo pan (equal power)
Pan2.ar(in, pos, level)

Pan4 .. quad pan (equal power)
Pan4.ar(in, xpos, ypos, level)

PanB .. ambisonic B-format pan
PanB.ar(in, azimuth, elevation, gain)

PanAz .. azimuth panner
PanAz.ar(numChans, in, azimuth, level, width)

LinPan2 .. linear stereo pan
LinPan2.ar(in, pan)

LinPan4 .. linear quad pan
LinPan4.ar(in, xpan, ypan)

LinXFade2 .. linear stereo cross fade
LinXFade2.ar(l, r, pan)

LinXFade4 .. linear quad cross fade
LinXFade4.ar(lf, rf, lb, rb, xpan, ypan)

Delays

Delay1 .. one sample delay
Delay1.ar(in, mul, add)

Delay2 .. two sample delay
Delay2.ar(in, mul, add)

DelayN .. simple delay line, no interpolation
DelayN.ar(in, maxdtime, delaytime, mul, add)

DelayL .. simple delay line, linear interpolation
DelayL.ar(in, maxdtime, delaytime, mul, add)

DelayA .. simple delay line, all pass interpolation
DelayA.ar(in, maxdtime, delaytime, mul, add)

CombN .. comb delay line, no interpolation
CombN.ar(in, maxdtime, delaytime, decaytime, mul, add)

CombL .. comb delay line, linear interpolation
CombL.ar(in, maxdtime, delaytime, decaytime, mul, add)

AllpassN .. all pass delay line, no interpolation
AllpassN.ar(in, maxdtime, delaytime, decaytime, mul, add)

MultiTap .. multi tap delay
MultiTap.ar(delayTimesArray, levelsArray, in, mul, add)

DelayWr .. write into a delay line
DelayWr.ar(buffer, in, mul, add)

TapN .. tap a delay line, no interpolation
TapN.ar(buffer, delaytime, mul, add)

TapL .. tap a delay line, linear interpolation
TapL.ar(buffer, delaytime, mul, add)

TapA .. tap a delay line, all pass interpolation
TapA.ar(buffer, delaytime, mul, add)

GrainTap .. granulate a delay line
GrainTap.ar(buffer, grainDur, pchRatio, pchDispersion, timeDispersion, overlap, mul, add)

PitchShift .. time domain pitch shifter
PitchShift.ar(in, winSize, pchRatio, pchDispersion, timeDispersion, mul, add)

PingPongN .. ping pong delay, no interpolation
PingPongN.ar(leftIn, rightIn, maxdtime, delaytime, feedback, mul, add)

PingPongL .. ping pong delay, linear interpolation
PingPongL.ar(leftIn, rightIn, maxdtime, delaytime, feedback, mul, add)

Samples and I/O

PlayBuf .. sample playback from a Signal buffer
PlayBuf.ar(signal, sigSampleRate, playbackRate, offset, loopstart, loopend, mul, add)

RecordBuf .. record or overdub audio to a Signal buffer
RecordBuf.ar(buffer, in, recLevel, preLevel, reset, run, loopMode)

AudioIn .. read audio from hardware input
AudioIn.ar(channelNumber)

DiskIn .. stream audio in from disk file
DiskIn.ar(soundFile, loopFlag, startFrame, numFrames)

DiskOut .. stream audio out to disk file
DiskOut.ar(soundFile, numFrames, channelArray)

Event Spawning

Spawn .. timed event generation
Spawn.ar(eventFunc, numChannels, nextTime, maxRepeats, mul, add)

Voicer .. MIDI triggered event generation
Voicer.ar(eventFunc, numChannels, midiChannel, maxVoices, mul, add)

XFadeTexture .. cross fade events
XFadeTexture.ar(eventFunc, sustainTime, transitionTime, numChannels, mul, add)

OverlapTexture .. cross fade events
OverlapTexture.ar(eventFunc, sustainTime, transitionTime, overlap, numChannels, mul, add)

Cycle .. spawn a sequence of events in a cycle
Cycle.ar(array, numChannels, nextTime, maxRepeats, mul, add)

RandomEvent .. spawn an event at random
RandomEvent.ar(array, numChannels, nextTime, maxRepeats, mul, add)

SelectEvent .. spawn an event chosen from a list by a function
SelectEvent.ar(array, selectFunc, numChannels, nextTime, maxRepeats, mul, add)

OrcScore .. play an event list with an orchestra
OrcScore.ar(orchestra, score, numChannels, nextTime, maxRepeats, mul, add)

Misc

Scope .. write audio to a SignalView
Scope.ar(signalView, in)

Mix .. mixdown channels in groups
Mix.ar(channelsArray)
