
THE OM COMPOSER’S

BOOK

Volume One
Collection Musique/Sciences
directed by Jean-Michel Bardez & Moreno Andreatta

The Musique/Sciences series contributes to our understanding of the relationship between
two activities that have shared intimate links since ancient times: musical and scientific
thought. The often-cited Quadrivium (music, astronomy, geometry, and arithmetic) re-
minds us that, in an age imbued with the spirit of the Gods, it was not uncommon to
think of these two modes of thought as twins. During the twentieth century, music and
science developed new links, establishing relationships with mathematics and opening
new lines of musical research using information technology. Modeling, in its theoretical,
analytical and compositional aspects, is more than ever at the center of a rich musicologi-
cal debate whose philosophical implications enrich both musical and scientific knowledge.
The pleasure of listening is not diminished when it is more active, more aware of certain
generating ideas—au contraire.

Published works

Gérard Assayag, François Nicolas, Guerino Mazzola (dir.), Penser la musique avec les
mathématiques ?, 2006
André Riotte, Marcel Mesnage, Formalismes et modèles, 2 vol., 2006

Forthcoming

Moreno Andreatta, Jean-Michel Bardez, John Rahn (dir.), Autour de la Set Theory
Guerino Mazzola, La vérité du beau dans la musique
Franck Jedrzejewski, Mathematical Theory of Music
THE OM COMPOSER’S
BOOK

Volume One

Edited by
Carlos Agon, Gérard Assayag and Jean Bresson

Preface by
Miller Puckette

Collection Musique/Sciences
Editorial Board
Carlos Agon, Ircam/CNRS, Paris
Gérard Assayag, Ircam/CNRS, Paris
Marc Chemillier, University of Caen
Ian Cross, University of Cambridge
Philippe Depalle, McGill University, Montréal
Xavier Hascher, University of Strasbourg
Alain Poirier, National Conservatory of Music and Dance, Paris
Miller Puckette, University of California, San Diego
Hugues Vinet, Ircam/CNRS, Paris

Editorial Coordination
Claire Marquet

Page Layout
Carlos Agon and Jean Bresson

Texts translated by
Justice Olsson

Cover Design
Belleville

All rights of translation, adaptation and reproduction by any means are reserved for all countries.

Under the terms of article L. 122-5 (2° and 3° a) of the French intellectual property code of
1 July 1992, only “copies or reproductions strictly reserved for the private use of the copyist and
not intended for collective use” and “analyses and short quotations for purposes of example and
illustration” are authorised. “Any representation or reproduction, in whole or in part, made
without the consent of the author or of his successors in title, is unlawful” (article L. 122-4). Any
such representation or reproduction, by whatever means, would therefore constitute an
infringement punishable under articles L. 335-2 and following of the intellectual property code.

ISBN 2-7521-0027-2 and 2-84426-176-0


© 2006 by Editions DELATOUR FRANCE / Ircam-Centre Pompidou

www.editions-delatour.com
www.ircam.fr
Contents

Preface ix

Introduction 1

Writing a Homage to Marin Mersenne: Tombeau de Marin Mersenne for Theorbo and Synthesiser (General MIDI)
Michel Amoric . . . 11

Electronics in Kaija Saariaho’s Opera, L’Amour de loin
Marc Battier and Gilbert Nouno . . . 21

Vuza Canons into the Museum
Georges Bloch . . . 31

TimeSculpt in OpenMusic
Karim Haddad . . . 45

The Genesis of Mauro Lanza’s Aschenblume and the Role of Computer Aided Composition Software in the Formalisation of Musical Process
Juan Camilo Hernández Sánchez . . . 65

Generating Melodic, Harmonic and Rhythmic Processes in “K...”, an Opera by Philippe Manoury
Serge Lemouton . . . 87

Composing the Qualitative, on Encore’s Composition
Jean-Luc Hervé and Frédéric Voisin . . . 99

Navigation of Structured Material in “second horizon” for Piano and Orchestra (2002)
Johannes Kretz . . . 113

When the Computer Enables Freedom from the Machine (On an Outline of the Work Hérédo-Ribotes)
Fabien Lévy . . . 133

Some Applications of OpenMusic in Connection with the Program Modalys
Paola Livorsi . . . 141

Fractals and Writing, Six Fractal Contemplations
Mikhail Malt . . . 157

Algorithmic Strategies in A Collection of Caprices
Paul Nauert . . . 175

Sculpted Implosions: Some Algorithms in a Waterscape of Musique Concrète
Ketty Nez . . . 191

STRETTE
Hèctor Parra . . . 205

Klangspiegel
Luís Antunes Pena . . . 223

Kalejdoskop for Clarinet, Viola and Piano
Örjan Sandred . . . 239

Flexible Time Flow, Set-Theory and Constraints
Kilian Sprotte . . . 253

To Touch the Inner Sound, Before it Becomes Music; to Dream About Dreams, Before they Become Real
Elaine Thomazi-Freitas . . . 267

Appendix. OpenMusic 279


Preface
The field of computer music can be thought of as having two fundamental branches,
one concerned with the manipulation of musical sounds, and the other concerned with
symbolic representations of music. The two are iconized by Max Mathews’s MUSIC
program and Lejaren Hiller’s Illiac Suite, both of 1957, although both have important
antecedents. The two branches might provisionally be given the names “Computer Gen-
erated Music” (Denis Baggi’s term for it) and “Computer Aided Composition”—or CGM
and CAC for short. (In France the latter is called “Composition Assistée par Ordinateur”.
The corresponding English acronym, “CAC”, is less than mellifluous and someday we
should settle on a better one.)
As a field, CAC has flown a very different trajectory from CGM. While in the United
States the great strides between 1957 and 1980 were on the CGM side, in Europe during
the same period we saw work by Xenakis (starting as early as 1962), Koenig (Project 1,
1964), and many others that, taken together, can be seen as the first proof that CAC
could be widely useful in creating new music. Meanwhile in the United States, many
people, myself included, thought of CAC as a topic of computer science research, not
likely ever to give rise to tools useful to musicians. Interest in CAC has grown in the
USA in the intervening years, but excepting the brilliant example of David Cope, work
in the USA on this topic has lagged behind that in Europe.
Today CGM is ubiquitous and CAC appears futuristic. An entire generation of com-
posers and other musicians has learned to use the computer to synthesize and process
musical sound. It can be argued that the computer has been the one addition to the
classical orchestra since the advent of percussion early in the twentieth century. This is
a great achievement. CGM is now generally accepted, and the status of a musician in
the Mathews tradition essentially depends on how good his or her output sounds, in the
same way as that of an orchestral string or wind player. CGM has become a normal,
respectable, middle-class occupation.
The development of CAC, on the other hand, has seen a deepening realization that
the problems of the field are much more difficult than they may have appeared at first.
In hindsight, this should have been obvious to everyone all along: CGM is in effect
building instruments (which were previously made of wood and the like), but CAC is in
effect making the computer carry out thought processes previously carried out in human
brains. Clearly, a piece of wood is easier to understand than even a small portion of a
human brain. Ultimately, CAC researchers will have to settle for much less than a full
understanding of even a single musical phenomenon. The best that can be hoped for is
partial solutions to oversimplified versions of the real problems.

Computational Complexity
From my point of view, having come to computer music in the year 1979, the comparative
situations of the two branches in the year 2006 come as a surprise. To the user of a 16-bit
DEC PDP-11 computer, the manipulation of musical symbols looked trivial compared to
the hard practical problems posed by the sheer size of the problem of sound synthesis.

A five-second, one-channel soundfile was a large object in 1979, and the realization of a
piece of computer music lasting ten minutes could easily require a solid week of computer
time. Back then I was exceedingly lucky even to have that possibility. Now, in 2006,
three IRCAM pieces from the 1980s can be run simultaneously in real time on a computer
small and light enough that it could easily be thrown several rows into the audience.
The symbol manipulators, on the other hand, whose programs once only took a few
thousand arithmetic operations and a one- or two-inch stack of punched cards to run, now
take their place as the heaviest computer users in all of music. The growth in complexity
of CAC algorithms appears to have outrun the ability of computers to run them. The
Markov chains of the early days (a few hundred arithmetic operations per note, say) have
given way to combinatorial search and optimization problems requiring as many trillions
of calculations as the user can afford to wait for. The imagination of composers and
researchers in CAC far outstrips the supply of available computation power.
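
By way of scale, here is a minimal sketch, in Python and with an invented transition
table, of the kind of first-order Markov chain used in those early days; choosing each
note really does cost only a handful of operations:

    import random

    # A toy first-order Markov chain over note names, of the sort early CAC
    # programs used. The transition table and the seed are invented here
    # purely for illustration.
    TRANSITIONS = {
        "C": ["D", "E", "G"],
        "D": ["C", "E"],
        "E": ["D", "F", "G"],
        "F": ["E", "G"],
        "G": ["C", "E", "F"],
    }

    def markov_melody(start="C", length=16, seed=1957):
        rng = random.Random(seed)
        note, melody = start, [start]
        for _ in range(length - 1):
            note = rng.choice(TRANSITIONS[note])  # one table lookup per note
            melody.append(note)
        return melody

    print(markov_melody())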
In general, the cost of CGM per audio sample has not remained constant, but has
not grown quickly. The best CGM of the seventies, thirty years ago say, probably cost
less than ten thousand arithmetic operations per sample of output. The speedup in
computing in the intervening years has allowed us the luxury of running the classic CGM
algorithms in real time, thus changing the nature of the pursuit fundamentally. The
algorithms themselves have grown somewhat more complex as well, but the universal
preference for real-time synthesis and processing has put a lid on that growth.
CAC has not become real-time at all. Since it appeals to the composer in us (whereas
CGM appeals to the performer in us), it seems reasonable to expect that CAC software
will continue to absorb all the computing resources that can possibly be brought to bear
on it. Anything allowed to grow will naturally do so.

Programmers and users

About 1970, when I started using the computer—there was only one in my town—
there was no distinction between computer users and computer programmers. It only
occasionally happened that someone used a program that someone else had written. For
the most part, each program was written for its own particular, special purpose. Over the
following decade, however, this began to change for two reasons. First, people learned
to write large, flexible programs (troff being my favorite example) powerful enough that
different users could turn a given program to very different purposes.
Second, and more subtly, the possibilities for combining different bits of software
together began to multiply. An important step was the invention of the Unix operating
system, which unified all I/O through one very simple and clean abstraction. This
permitted users to direct one program’s output to another program’s input, frequently
without the need even to choose a name for the data passing between the programs.
This made it possible for a relatively simple program such as “tr” to be put to all sorts
of different uses. Try it on a headerless soundfile, for instance, to put a classical music
lover’s teeth on edge, in a way I doubt the program’s original designer had imagined.
A parallel development was underway in the computer music community. Here the
roots of the idea reach all the way back to about 1958 when Max Mathews’s MUSIC N
programs began to offer reconfigurable unit generators, which the user configured in a
network to generate musical sounds, predating and anticipating the modular synthesizers
built by Moog and others in the 1960s. By the mid 1980s many researchers were thinking
about trying to turn this notion of reconfigurability to use in passing data of other formats
than audio signals.
Programs themselves (such as MUSIC, Max/MSP, or OpenMusic) have become com-
plicated and difficult to develop, but once the paradigm for making interconnections has
been worked out, they are comparatively easy to extend. Users can contribute extensions
and benefit from each other’s work, without having to worry about the tricky stuff such
as GUIs or file formats.
In another sense, however, a patch is itself a program, and the job of connecting
simple functions together to make larger ones can be thought of as programming. In
this sense, the trend toward patch-based software can be seen as shifting the level on
which the user programs the computer away from the C or Lisp code itself and into the
“language” of patches. It may be that this is fundamentally a better level at which to
operate a computer, than either that of code or that of the user of a large, monolithic
program such as a database application.

Art music and the computer

In Europe and its former colonies such as the U.S., composers, since early in the twenti-
eth century, have paid much attention to problems of symbol manipulation and combi-
natorics. This idea found an early expression in Schoenberg’s 12-tone harmony, continued
through the serialism typified by Webern and later Boulez, and may have culminated in
the various mathematics-inspired approaches of Xenakis. The affinity of composers such
as Xenakis and Koenig for computers seems to grow naturally from their symbol-based
and/or quantitative approaches to musical composition.
It is no accident that computers were used in experimental classical composition,
whereas more tradition-bound musics such as Jazz stayed far away from the computer
room. And researchers in CAC repaid the compliment by paying close attention to
sets and permutations, and less so to melodic contour, for example. To this day, the
field of CAC looks primarily to the classical ‘art music’ tradition as a source of working
assumptions and problems.
Since computers are well adapted to symbolic and quantitative manipulation, it is
not surprising that ‘art’ composers have often turned to the computer, sometimes merely
for assistance, and sometimes for inspiration. The strange field called ‘computer science’
(which has little to do with writing or using computer programs) is often invoked as well.
Certain metaphors from computer science, such as hierarchies and networks, machine
learning, and database operations, often can be explicitly mapped to musical processes,
and this is useful to some of the more formally procedural composers.
I think these formalistic tendencies are now giving way to a more intuitive approach
among ‘art’ composers. Whether or not that is true, there is certainly more crosstalk
today between ‘art’ composers and musicians of other idioms. This is reflected in a
general movement in CAC away from formal and mathematical tools, in favor of more
intimate modes of interaction with the computer, even up to direct manipulation of data
structures by composers. So, for instance, where early CAC researchers wrote computer
programs whose output might be an entire piece of music, today we see developments such
as OpenMusic which, in their graphical orientation and patching metaphor, encourage
the composer to proceed by experimentation and intuition instead of by formal planning
and specification.

The field of CAC in general is moving away from mathematical and computer science
constructs, and toward a more useful and powerful working relationship with the rest of
the composition process. A greater fluidity of interchange between the problem-solving or
material-generating functionality of a program such as OM, and the higher-level, partly
intuitive thought processes that must reside in the human brain makes the entire field of
CAC more accessible and more widely useful than ever before.

Software and Computer Aided Composition

In CAC, the variety of approaches and the flexibility of applications have grown as time
has passed. Back when computer programs used stacks of cards as input and output, it
was natural to think of “composition” as an atomic computer job: in go the program and
some parameters, and out comes music. As computing became interactive, a much more
powerful mode of working emerged, in which the computer might be called on hundreds
or thousands of times to solve specific problems, such as voicing a chord or quantizing a
rhythm.
Lisp, which is widely used in AI circles, was an early favorite among CAC researchers.
In return, the AI community, especially around MIT and Stanford, has long taken a
strong interest in music. The history of CAC software is dominated by large Lisp-based
systems. Perhaps the most important advance in the field of CAC was Patchwork by
Mikael Laurson, a direct ancestor of OM. Patchwork (which Laurson still develops)
presents the user with a patching GUI, in which the semantic is that each object, to
produce its output, asks the objects upstream of it to compute their outputs, recursively.
This demand-driven dataflow model is also used in OM, although the nature of the
function calls has been greatly generalized compared to those of Patchwork.
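
As a minimal sketch of what such demand-driven evaluation amounts to (in Python
rather than Lisp, with a Box class invented for illustration; it stands for OM’s boxes
and connections only loosely):

    # Asking the last box for its value pulls, recursively, on everything
    # upstream of it; each box caches its result once computed.
    class Box:
        def __init__(self, fn, *inputs):
            self.fn, self.inputs, self.cache = fn, inputs, None

        def value(self):
            if self.cache is None:
                # Demand propagates upstream before fn is applied.
                self.cache = self.fn(*(b.value() for b in self.inputs))
            return self.cache

    const = lambda x: Box(lambda: x)
    c_major = const([60, 64, 67])                        # MIDI pitches
    fifth_up = Box(lambda ch: [p + 7 for p in ch], c_major)
    merged = Box(lambda a, b: sorted(set(a + b)), c_major, fifth_up)
    print(merged.value())  # evaluating the final box evaluates the whole patch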
Central to the success of both Patchwork and OM is the presence in each of tightly
integrated music notation display packages. A transparent connection is maintained
between the displayed score and the (user-accessible) data underneath, allowing for easy
transitions between the two media. This greatly enhances the ability of the user to tightly
integrate the algorithmic part of the work (on the data structures) with the intuitive
aspect (in the composer’s mind, transmitted via the notation). Initially, the notation
GUI functions as an invitation to composers to try the software. After the composer is
attracted, the notation package serves as his or her personal interpreter to the language
of Lisp.
That the developers of Patchwork and OM have actively sought to involve composers
in the earliest stages of the design of the software is itself another decisive reason for their
success. Few centers, anywhere in the world, have ever managed to match IRCAM’s
simultaneous ability to attract world-class music production projects and to support
researchers in the field of computer music. Even though state support for IRCAM has
eroded since the golden age of the 4X and the ISPW, the creators of OM maintain this
spirit, of which the present book is an important manifestation.
OM is almost certainly now the world’s dominant platform for doing CAC research
and practice, despite the presence of several other approaches (including one, by Karl-
heinz Essl, that runs within Max). Is this because OM’s design is the best, or is it that
OM has benefitted from the presence at IRCAM of so many willing composers, such as
the ones represented in this book? The two rival explanations are impossible to extricate
from one another.

This doesn’t imply that all interesting research in CAC is being done in OM. David
Cope’s work seems to me the most interesting CAC research from a theoretical stand-
point. His particular software solutions belong to the class of “automatic composition”
programs and are hence less adaptable to the needs of the main body of composers today
than systems such as OM. I hope someday to see his ideas brought out in more modular
form.

Promising areas of current and future research

An excellent trend is underway, and has been for at least several years, in that composers
of computer music today no longer immerse themselves in one primary software package
to realize works. The possibility of passing between one world and another (such as Max
and OM, for example) would not have occurred to many researchers or composers in the
days when mastery of any one idiom could take years of study and work. But as computer
music software in general has become more open and more modular, the opportunities
for interchange of data have increased, and at the same time the initial cost (primarily in
time) of using a new software package has gone down. Many new and interesting sparks
should fly from the collisions between the vastly different software packages that now can
be brought together.
Related to this, perhaps even a case of the trend toward interoperation, is the growing
involvement of CAC in manipulating sounds directly (not through the mediation of a
score). Such work lies simultaneously within CGM and CAC, and in one possible future
the distinction between the two will simply disappear. This is in high contrast to the
early days of CAC in which the output was a stack of punched cards, or even to the
situation only ten years ago, in which Patchwork users needed MIDI hardware to hear
their musical ideas. (CGM people like me scoff at the practice of using MIDI to synthesize
music.) The world of sounds is much richer than the world of musical note heads and
stems. The latter will always be a useful organizing and mnemonic device, but the former
is what actual music is made of.
In more general terms, I look forward to an increased concern in CAC about continu-
ously variable quantities, such as parameters of analysis or specification of sound (as well
as the function of time that represents a recorded sound itself). In the future, computer
music in general will have to deal with high-dimensional objects such as would specify a
timbre. The dimensionality of a set such as the “set of all possible sounds” is probably
not even well defined. New techniques will be needed to describe and manipulate such
quantities.
I’m also very excited about the recent work in OM on generalizing constraint problems
to optimization problems. The cool thing about treating compositional problems as
optimizations instead of constraints is that you can juggle the weights of the terms of the
function and watch, incrementally, as the machine tries to optimize the ever-changing
function. This can be used to find solutions to standard constraint problems, by adding
and dropping component constraints and watching how the solution changes. It’s almost
never interesting to see ALL the solutions of a constraint problem anyway; most of the
time there’s either no solution or else there are lots that you’d be equally happy with.
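
As a sketch of the idea, and not of any actual OM machinery, the following Python
fragment turns two invented rules (a smooth contour, and a preference for notes of a
mode) into weighted penalty terms and lets a naive stochastic search chip away at the
total; changing the weights changes which rule wins:

    import random

    MODE = {0, 2, 4, 5, 7, 9, 11}   # pitch classes of an invented mode

    def cost(melody, w_smooth, w_mode):
        smooth = sum(abs(a - b) for a, b in zip(melody, melody[1:]))
        off_mode = sum(1 for p in melody if p % 12 not in MODE)
        return w_smooth * smooth + w_mode * off_mode

    def improve(melody, w_smooth, w_mode, steps=2000, seed=0):
        rng = random.Random(seed)
        best, best_cost = list(melody), cost(melody, w_smooth, w_mode)
        for _ in range(steps):
            cand = list(best)
            cand[rng.randrange(len(cand))] += rng.choice([-2, -1, 1, 2])
            c = cost(cand, w_smooth, w_mode)
            if c <= best_cost:          # keep any change that does not hurt
                best, best_cost = cand, c
        return best

    start = [60, 66, 53, 71, 58, 63, 49, 68]
    print(improve(start, w_smooth=1.0, w_mode=5.0))
    print(improve(start, w_smooth=5.0, w_mode=1.0))  # re-weighting, new result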
Next, I would like to see some of the techniques now only available in OM become
usable some day within real-time environments. This is clearly a huge undertaking, since
the style of programming currently used in real-time applications is so different from that
in OM. But there would be much gained if this became possible. In the meantime it’s
possible to send messages back and forth between OM and some lower-latency process
that takes care of real-time performance. But in the ideal, the connection between the
real-time and the compositional environments would be much more based on sharing
data and functions, rather than just communication protocols.
Another word for real-time composition is “improvisation”, and in this view George
Lewis was doing CAC research when he developed Voyager in the 1980s, and Salvatore
Martirano was apparently also doing CAC with the SalMar in the 1970s. Improvisation is
not only important in its own right, but also as the primary means by which composers
and performers have extended instrumental language, probably long before music was
ever written down. Improvising with computers will lead us to a greater mastery of the
computer as a musical instrument in composed settings as well as improvised ones, and
I think CAC will play an important role in this development.
One more area. I’ve mentioned David Cope’s automatic composition project, and
although I can’t say I understand his methods very well, it’s clear that his use of natural
language recognition tools (in particular, “augmented networks”) has somehow captured
something essential in the way classical musical forms work. No other research I’m aware
of has ever gained any real purchase on the questions of how musical motion, tension and
resolution, and musical themes work; and no other software algorithms I have seen can
make music that develops (in the classical sense) over the time scale of an entire piece in
the way Cope’s can. There is clearly something vitally important in this work, and other
researchers (myself not excepted) should be making a greater effort to learn its lessons.

Read this book


Perhaps I have named the frontiers well and perhaps not (only time will tell), but at least
as far as the present is concerned, this volume contains the best summary of current work
in the field that I know of. That each chapter of the book concerns the composition of a
real piece of music reassures us that the methods described here can indeed be brought
to musical fruition. And while, by the design of the book, composers using software other
than OM aren’t represented here, perhaps two thirds or three quarters of the entire field
creeps in at one spot or another. OM has established itself as the most important locus
of convergence of researchers and composers working on, or in, CAC; and, well, here they
are.
This book should prove useful not only to those wishing to learn how to use OM
in its present state (at least at the moment before OM develops further or is replaced
by something else) but also in the longer term as a repository of ideas, many of them
having roots in the past, even in non-computer-music, and many of which will reappear
in different musical and software contexts in the future. This is how music works, after
all—a musical idea is important in the way it speaks to the rest of the world of musical
ideas. It’s not the individual notes that count: it’s their interrelationships.

Miller Puckette
February 20, 2006
Introduction
This book brings together accounts of the way various composers approach the com-
puter when composing. More specifically, each chapter of this book gives an example of
computer-aided composition (CAC). While it is true that the composers presented here
have widely differing styles, they share a common denominator: they all use OpenMusic
software when composing. OpenMusic is a visual programming language for computer-
aided composition designed and developed by the Musical Representation team at Ircam.
It was born out of years of research and experiment in the use of symbolic representations
of musical structures and their application in composition.
After all these years of practice, we would like to sum up the current situation in
computer-aided composition. However, rather than publish a series of theoretical texts,
we thought it would be preferable to let the composers speak for themselves. Each
article included in this book describes one or more pieces that have already been played.
Consequently, the book will talk about musical results rather than experiments.
Through the descriptions of the computer programs used for composition, the reader can focus his
or her attention on various questions: music formalization; the specification and organi-
sation of musical material (structure and form); external causes (perceptual, speculative,
aesthetic etc). This book will make it possible to compare a large number of different
approaches, types of artistic development, and above all composition styles. Moreover,
it will allow us to hear the viewpoints of a good number of composers concerning the
potential of the computer in composing a musical work.
The ways in which each composer approaches the computer are unique, on several
levels. First of all, in their degree of mastery of the computer: there are composers for
whom the computer is a kind of magic hat: they can pull anything out of it. At the
opposite end of the scale are composers who have a reasonable understanding of what can
and must be done, and what belongs more to the realm of science-fiction. Secondly, there
are those who never discuss the musical relevance of the computer, and those who have
totally integrated the computer into their aesthetics and creative strategy. Concerning
CAC, the huge diversity of aesthetic models cohabiting among today’s composers makes
it almost seem as if there has to be a computer system for each composer. Starting from
these special relationships between composers and the computer, the aim of this book
therefore is to serve as a source of reflection about computer-aided composition.
The practice of computer-aided composition is both old and new. In 1957, L. Hiller,
along with L. Isaacson and R. Baker, created the first piece of music “composed” on
a computer: the Illiac Suite for String Quartet. Shortly afterwards, at the end of the
fifties, the invention of the transistor led to the second generation of computers, the
direct ancestors of the computers of today. With a little boldness, we could explain this
precocious birth of CAC by saying that it represented a big challenge for the artificial
intelligence of the time. If we twist Minsky’s definition of AI somewhat, we could imagine
computer-aided composition at that time as follows:

Making computer programs do musical composition, a task better carried out
by human composers because it demands very high-level mental processes such
as perceptual learning, memory organisation and critical reasoning.

This view of CAC might be fascinating to the computer scientist, but would not at all
be met with approval by composers, for reasons that are obvious. Those early experiences
— which can be summed up in three stages: selection of the material, random combi-
nation, and then selection of the result — have no aesthetic value for most composers
today. Cooperation between computer scientists and composers has gone far beyond a
state of affairs where the programmer simply imposes computer tools on the composer, or
where the programmer simply carries out the composer’s instructions. There is no sense
in considering the composer as an end user; firstly, ease of use and timesaving are not
his main concerns, and secondly, it is difficult or impossible for the composer to entrust
his creative action to somebody else. In our approach to computer-aided composition,
we do not see the composer or the music as something to be modelled, but rather we
instigate and then study compositions in an effort to gain a better understanding of the
role of computer processing and concepts in music. Thus we are proposing a shift in the
problem/solution paradigm, so beloved of programmers, towards an idea/exploration ap-
proach, in which computer tools will make it possible to either carry out or to modify
the composer’s intentions, depending on the possibilities open to him.
“Modern” computer-aided composition, in the form in which it began to emerge in
the Eighties, was influenced by various factors: the availability of personal computers;
technical progress in graphic user interfaces; emerging standards such as MIDI in 1983;
and technical progress in programming languages, in our view the single most important
factor. Using a programming language means that the composer has to think deeply
about the very process of formalisation. It also means that he is not working with some
black box offering only a limited number of choices. Programming languages open up
an enormous range of possibilities to the composer – provided he is willing to make the
effort in formulating and designing his project.
However, it should be noted that certain computer practices do not count for much in
composition. They are merely tools that do not greatly influence the composer’s aesthet-
ics. By contrast, there are others that, with a little boldness, allow us to learn something
of the role of programming in composing. Let us imagine a composer-programmer, for
whom the computer is an absolutely necessary ingredient when composing. Naturally,
this does not have to be the case: there are people who can compose without using a
computer. But for those who do use a computer, a programming language for composers
is only the first step towards a “more intelligent” way of using the computer in musical
creation. In other words, using a programming language in no way guarantees a quality
result. And finally, it is not easy to settle on a relevant aspect of composition in order to
formalise it in computer terms. Experience has tended to show that tasks which appear
elementary to the composer can be extremely complex in computer terms, impossible
even. Similarly, information complexity is not at all the same thing as musical complex-
ity or richness. Using computer aided composition software for simple and repetitive
tasks is not an aspect that should be neglected.
An excellent example of this way of using the computer can be found in Lemou-
ton’s article “Generating melodic, harmonic and rhythmic processes in K..., an opera
by Philippe Manoury”. Philippe Manoury is well known in the computer music domain
for his innovations and his research regarding real-time processes, but he also uses post-
serial techniques in his writing. Some of those post-serial rules can easily be automated.
The very simple OpenMusic patches shown in the article were used to generate musical
material which was then printed in several bound notebooks, which served as melody,
chord and rhythm “reservoirs” into which the composer was able to dip while writing the
score. Various other perhaps more important ways in which the computer was used in
composing the opera, concerning synthesis, spatializing, as well as real-time systems, are
not dealt with in the present article.
Computer music software, in the way we conceive it, can in no way handle all as-
pects of a composition. In most cases, OpenMusic is used for some special aspects of
the composition (rhythmical, melodic, harmonic etc). Similarly, the results proposed
by the computer are often corrected or modified by hand, when they are not rejected
outright. Lévy, in his article “When the computer enables freedom from the machine
(on an outline of the work Hérédo-Ribotes)”, puts forward musical situations, illustrated
with examples, in which, in his view, OpenMusic is relevant (calculating “prime spectra”,
“iterated harmonic transposition”, Tanner vectorial representation etc). In dealing with
other tasks, by contrast, e.g. quantization, the composer prefers to take advantage of
the slowness of pen and ink and manual calculation, even though there is a programme
which would render the task less laborious. In this article, the author deals with two
aspects of OpenMusic. The first is epistemological, with a view to demonstrating that
computer music techniques, far from getting the composer locked into considerations of
an exclusively technical nature, allow him to set himself free of them. The second as-
pect is of an aesthetic nature, where Fabien Levy attempts a succinct description of the
artistic thinking behind computer procedures used in his composition.
Generally speaking, OpenMusic was designed to enable the composer to set up the
programmes needed in preparing complex material, structured by a set of rules expressed
in a coherent way. We approach computer-aided composition from three angles:
formalisation, modelling and experimentation.
Musical formalisation is the phase that precedes the conception and the writing of one
or more pieces. It serves as a musical language for the composer. In our view, we can ap-
proach the source that is to be formalised in two ways: the first is based entirely on purely
musical issues, and serves to decide which computer tools will be the most suitable for
a given task; the other approach consists in transferring information or knowledge from
an extramusical domain (which may be of a scientific, social, religious, literary or other
nature) towards the musical domain. As for the relevance of a given formalisation ap-
proach, this will depend in the first instance on the musical issues requiring formalisation.
In the second instance, it will depend upon the correspondences that exist between the
extramusical entities and sufficiently significant musical structures. It should be noted
that formal logic does not necessarily imply the use or the presence of musical coherence.
If a correspondence of this type is to be used in a musical composition, it needs to be
supported via musical conceptualisation and thought.
An example of the first type of formalisation is to be found in the article “Timesculpt
in OpenMusic”. In this text, Karim Haddad’s chief preoccupation is musical time. It
contains various compositional strategies for manipulating duration. Haddad introduces
the notion of a “time block” as a symbolic representation for a duration. Time blocks
are displayed as bars, but they only exist during the composition phase. The composer
has built up a time-block algebra based on tree structures, using OpenMusic’s symbolic
rhythm representations (to whose development he has contributed considerably). Various
operations (rotations, homotheties/scalings, inversions, filtering etc) are applied to the
time blocks in order to generate the rhythmic material of his composition. Thanks to
this formalisation, Karim Haddad can create time blocks that are manually constructed
or generated by sound analyses. Each of the selected results is then simplified into
traditional music notation for performance.
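
By way of illustration only, the following Python sketch applies rotation, scaling and
inversion to a flat list of durations (expressed as fractions of a whole note); Haddad’s
actual operations act on OM’s tree-structured rhythm representations, which these flat
lists merely approximate:

    from fractions import Fraction as F

    def rotate(durations, n):           # start the block elsewhere in its cycle
        n %= len(durations)
        return durations[n:] + durations[:n]

    def scale(durations, factor):       # "homothety": stretch or shrink time
        return [d * factor for d in durations]

    def invert(durations):              # read the block backwards
        return durations[::-1]

    block = [F(1, 4), F(1, 8), F(1, 8), F(1, 2)]
    print(rotate(block, 1))
    print(scale(block, F(3, 2)))
    print(invert(block))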
A formalisation example taking as its source an extramusical domain is to be found
in “Navigation of Structured Material in Second Horizon for Piano and Orchestra”. In
order to create organic musical gestures in this piece, Kretz bases it on the simulation
of physical movement. Using MAX software, he formalised the movement of a ball in
a closed room, subject to factors such as gravity and friction. Various trajectories were
then imported into OpenMusic to be used as writing parameters. The translation of
spatial positions into the harmonic domain did not yield satisfactory results as far as the
composer was concerned; however the various spatial envelopes did influence the final
form of the work. Johannes Kretz makes particular use of techniques resulting from
constraint programming in order to create musical structures using musical gestures as
their source, in turn derived from the movements of a ball. His domain of variables is a
pool of chords generated by an imposed 12-tone series. Another constraint method, this
time applied to orchestration, opens up unexplored and promising fields of research.
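
A rough sketch of such a ball simulation might read as follows (the original patch was
built in Max, and all constants here are invented); the resulting height curve is the kind
of trajectory that was then imported into OpenMusic and mapped onto writing
parameters:

    # Gravity pulls the ball down; a damping factor stands in for friction,
    # losing energy at each bounce off the floor.
    def ball_trajectory(y=1.0, vy=0.0, g=-9.81, damping=0.8,
                        dt=0.01, steps=500):
        path = []
        for _ in range(steps):
            vy += g * dt
            y += vy * dt
            if y < 0.0:                # hit the floor: reflect and damp
                y, vy = -y, -vy * damping
            path.append(y)
        return path

    heights = ball_trajectory()
    # e.g. map heights onto a pitch range before further compositional work
    pitches = [48 + round(h * 24) for h in heights]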
Formalisation generated by mathematics is more difficult to classify than the two
preceding categories. The question of whether music is a mathematical discipline is not
dealt with directly in this book. Nevertheless, it cannot be denied that numbers are
an important source of inspiration in composition. The golden mean, the Fibonacci
series, or magic squares are all frequently used examples. In “Writing a Homage to
Marin Mersenne: Tombeau de Marin Mersenne for Theorbo and Synthesiser”, the com-
poser Michel Amoric uses OpenMusic to calculate global spectral analyses of instrumen-
tal sounds, based on Mersenne numbers. Other composition techniques round out this
homage to Mersenne: the use of short extracts or fragments from 18th century pieces,
and the implementation, in the form of constraints, of rules proposed by Mersenne in his
book Harmonie Universelle.
In his article “Fractals and Writing”, Malt demonstrates how he applies the mathe-
matical notion of fractals when constructing a “self-similar” form that serves as an overall
guide to his piece Six Fractal Contemplations. This is made up of six short pieces for
solo instruments (trumpet, flute, viola, voice, guitar and clarinet) and CD. According to
the performance order of each piece, the composer sets up a curve based on the evolution
of each instrument’s register centres. This curve shows a global aspect of the work, but
it also determines the way each piece evolves. This is where the notion of self-similarity
comes in: the evolution of each piece is the same as the evolution of the sequence of the
six pieces. For this purpose, Mikhail Malt used “fractal proliferation” via IFS (Iterated
Function System), a collection of functions that he developed in OpenMusic for the ma-
nipulation and the creation of recursive linear systems. This system most notably makes
it possible to build fractal objects. The IFS system is described in detail in this article,
as well as its underlying mathematical theory.
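
As a rough illustration of the principle only (the affine maps below are generic, not
those of Malt’s library), an IFS can be reduced to a few lines: iterate randomly chosen
contractions of an interval and collect the visited points, which approximate the fractal
attractor:

    import random

    # Two affine maps x -> a*x + b of the unit interval; iterating them at
    # random traces out a Cantor-like attractor.
    MAPS = [(1 / 3, 0.0), (1 / 3, 2 / 3)]

    def ifs_points(n=1000, seed=42):
        rng = random.Random(seed)
        x, points = rng.random(), []
        for _ in range(n):
            a, b = rng.choice(MAPS)
            x = a * x + b
            points.append(x)
        return points

    # Attractor points can then be mapped onto musical values, e.g. pitches:
    pitches = [36 + round(p * 48) for p in ifs_points(32)]
    print(pitches)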
Concerning modelling, the composer has to create a formal device which, by providing
at least a partial description of the characteristics of a composition process, makes it
possible to simulate the compositional process in an experimental way for purposes of
verifying, observing or even generating similar processes. In OpenMusic, the modelling
result of a piece is represented by a computer program component known as a patch.1
The various articles in this book use two main modelling types, known as constructive
and declarative.
The constructive approach goes via an algorithm or a programme that specifies the
stages to be followed in constructing a musical piece. Usually, these programmes display
several parameters or entry fields which the composer may vary with each execution.
OpenMusic is essentially a functional language, and can be seen as a function library for
implementing musical operations. Functional programming appears to be well suited to
composers. It is a programming style that answers the need to set up multidimensional
objects, and to apply operations for purposes of modifying them, or creating new objects.
In OpenMusic, there are functions that are dedicated to intervals; to harmonic processing;
to chord, motif and more complex musical object interpolation. The functional approach
appears to be particularly relevant to composers who attach great importance to the
interaction between musical material and the processes that modify that material.
In her article “Sculpted Implosions: Some Algorithms in a Waterscape of Musique
Concrète”, Nez describes in detail the compositional techniques that she uses to control
pitch and rhythm. Among the pitch-control techniques she describes are the mor-
phing of a note’s harmonic series, contour formation for a chord sequence, and the various
types of interpolation between two chords. In the rhythmic domain, she explains the var-
ious algorithms for constructing time curves, and even more interestingly, the composer
explains how she combines these time curves with the generated chord sequences. These
different OpenMusic techniques are already an integral part of Ketty Nez’s instrumental
writing. The fact that they were implemented into OpenMusic made it possible for the
composer to explore new creative fields. For example, the harmonic and rhythmic results
were converted into filtering parameters which were then applied to prerecorded sounds.
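
One plausible reading of interpolation between two chords, sketched below in Python
with invented chords (pitches in midicents, middle C = 6000), is to move each voice
stepwise from the source chord to the target:

    def interpolate(chord_a, chord_b, steps):
        # Each intermediate chord mixes the two endpoint chords voice by voice.
        assert len(chord_a) == len(chord_b)
        return [[round(a + (b - a) * i / (steps - 1))
                 for a, b in zip(chord_a, chord_b)]
                for i in range(steps)]

    source = [6000, 6400, 6700]        # C major
    target = [5900, 6200, 6800]        # an invented goal chord
    for chord in interpolate(source, target, 5):
        print(chord)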
Another example of material transformation processing can be found in “Kaija Saari-
aho’s L’Amour de loin”. Nouno and Battier summarize the process as follows: the sound
material is made up of sampled spoken voice extracts from the libretto, as well as in-
strumental sounds. These samples are analysed using resonance model techniques. The
filtering phase consists of combining electronic components generated by a predefined
harmonic structure with a timbre generated by resonance model analysis. The electronic
part is first written in the form of a score. The electronic sounds are based on harmonic
structures specific to each character in the opera. Finally, the sounds thus created
are mixed in real time using the spatializer developed at Ircam.
In the declarative approach, instead of the composer providing a recipe for the con-
struction of a musical entity, he defines the principal features of the required object.
Then, by means of a combinatorial search mechanism, the computer proposes one or sev-
eral objects that satisfy the specified requirements. Constraint programming is the most
commonly used computer-aided composition programming paradigm in this approach.
A constraint is quite simply a logical relationship concerning a set of variables (unknown
problem variables in a given domain). It provides partial information about permitted
values, thus “constraining” the values within a certain range. Together, variables, domains
and constraints make up a CSP (Constraint Satisfaction Problem). Once the CSP has been
written in OpenMusic, it is passed on to a solver, which finds the values that satisfy the
constraints to be applied in a musical structure.
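
The following Python sketch shows the shape of such a CSP in miniature: the variables
are three chord tones, the domains are pitch ranges, and the constraints are predicates
tested by a plain backtracking search. It stands in for, and is very much simpler than,
the solvers actually used with OpenMusic:

    # Each constraint is checked on the partial assignment as it grows.
    def solve(domains, constraints, partial=()):
        if len(partial) == len(domains):
            return list(partial)
        for value in domains[len(partial)]:
            candidate = partial + (value,)
            if all(c(candidate) for c in constraints):
                result = solve(domains, constraints, candidate)
                if result is not None:
                    return result
        return None                      # dead end: backtrack

    domains = [range(60, 73)] * 3        # three voices, one octave each
    constraints = [
        lambda p: len(set(p)) == len(p),                       # no unisons
        lambda p: all(b - a >= 3 for a, b in zip(p, p[1:])),   # minimum spacing
        lambda p: len(p) < 2 or (p[-1] - p[0]) % 12 != 6,      # no tritone frame
    ]
    print(solve(domains, constraints))   # e.g. [60, 63, 67]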

1 In most of the articles, patch figures accompany the composer’s discourse. Understanding a patch
by looking at an image is not an easy task. It is not an absolute must in order to understand the
articles; however, any reader not familiar with OpenMusic who wishes to gain a rough idea of the
language can refer to the appendix at the end of this book as well as the recommended reading.
In “Kalejdoskop for clarinet, viola and piano” by Örjan Sandred, rhythmic constraints
are used in two ways: either the rhythmic entities are the result of a CSP, or they are
the elements of a constraint problem that concatenates them according to certain
rules. The first two sections of the piece contain three-voice rhythmic structures.
The three voices are linked by note-attack constraints: in particular, each note attack
in the first voice has its equivalent in the second voice. The same “inclusion” constraint
is imposed on the second and third voices. Other constraint types apply globally; for example,
in a given voice, short notes are favoured in the first bars whereas long notes are favoured
in the closing bars, to create a ritardando feeling. In this article, the reader will discover
a great variety of rhythmical and harmonic constraints, but even more importantly, the
composer lays out his own viewpoint as to the musical relevance of these rules.
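
The inclusion rule itself is easy to state in code. The sketch below, with invented onset
lists in beats, simply checks that every attack of an upper voice is also an attack of the
voice below it:

    def included(upper, lower):
        # Every onset of the upper voice must occur in the lower voice too.
        return set(upper) <= set(lower)

    voice1 = [0, 2, 3.5]
    voice2 = [0, 1, 2, 3, 3.5]
    voice3 = [0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5]
    print(included(voice1, voice2) and included(voice2, voice3))  # True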
In “The Genesis of Mauro Lanza’s Aschenblume and the Role of Computer Aided
Composition Software in the Formalisation of Musical Process”, Juan Camilo Hernandez
demonstrates an analogous use of constraints. One of the main themes of his article is
a hierarchical rhythmic language that generates rhythmic motifs. Using modulo-based
division of a rhythmic motif, Mauro Lanza obtains sub-motifs that coincide with the
original motif. Note density in a sub-motif is controlled by a system of constraints. The
composer uses Mikael Laurson’s PMC engine library. The article also contains shrewd
and delicate analyses of other aspects: creating a harmonic field for purposes of making
the sound of the composition homogenous; using melodic envelopes to control sound
processing; time stretching of each of the work’s sections.
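
One way to read this modulo-based division, offered as an assumption rather than as
Lanza’s actual procedure, is to keep every k-th attack of a motif, so that the sub-motif’s
onsets necessarily coincide with onsets of the original:

    def sub_motif(onsets, k, offset=0):
        # Keep the attacks whose index is congruent to offset modulo k.
        return [t for i, t in enumerate(onsets) if i % k == offset]

    motif = [0, 1, 1.5, 2, 3, 3.5, 4, 5]   # invented onset times in beats
    print(sub_motif(motif, 2))             # [0, 1.5, 3, 4]
    print(sub_motif(motif, 4, 1))          # [1, 3.5]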
The use of a given modelling type, constructive or declarative, can give rise to expres-
sions in one form that will not be favoured in the other. The choice of one or the other
form will therefore depend on how easily the model may be adapted to a given musical
situation. Nevertheless, within a single composition, it may be necessary to use both
approaches. In “Flexible Time Flow, Set-Theory and Constraints”, Kilian Sprotte uses
both functional programming and constraint programming in order to create rhythmic
and harmonic musical structures. Generally speaking, his approach here consists in defin-
ing algorithms so as to generate a set of musical objects (constructive approach), as well
as selecting and organising a subset that satisfies certain rules or constraints (declarative
approach).
Models can be a source of inspiration to the composer. In “Vuza canons into the
museum”, Georges Bloch takes as his basis a model proposed in OpenMusic for the
construction of “regular complementary rhythmic canons of maximal category”. Although
there is an algorithm to calculate musical objects, it should not be thought that the
composer will simply use it systematically on a “no questions asked” basis. Georges
Bloch’s in-depth study of these canonic structures revealed features of certain objects
that, in the model, had remained hidden. For example, by coupling certain voices, he
obtained canons that maintain regularity and complementarity properties. Moreover,
Georges Bloch proposed operations that occur between canons, for example, modulation
between two canons. Even though the operation gives a result that goes beyond the
bounds of the model’s properties, from a perceptual viewpoint, the structure thus created
is in tune with the composer’s intention. In the composition’s final canon, Georges Bloch
employs harmonic constraints that destroy the perception of the canon form and so create
the feeling of a texture.
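
The defining property of such canons can at least be checked mechanically. The sketch
below verifies that an inner rhythm, copied at each voice’s entry offset, strikes every
pulse of the period exactly once; for legibility it uses a small tiling of period 12 rather
than a genuine Vuza canon (the smallest of which have period 72):

    def tiles(rhythm, entries, period):
        # Collect every onset of every voice, reduced modulo the period...
        hits = [(r + e) % period for e in entries for r in rhythm]
        # ...and demand that each pulse is struck exactly once.
        return sorted(hits) == list(range(period))

    R = [0, 1, 6, 7]          # inner rhythm of one voice
    E = [0, 2, 4]             # entry offsets of the three voices
    print(tiles(R, E, 12))    # True: the three voices tile the 12-pulse cycle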
One of the principal functions of a model lies in the way it can be used to formulate
hypotheses and then simulate them. In this context, experimentation becomes absolutely
essential. Once the model is implemented, it is unlikely that the composer will remain
within the bounds of a given set of parameters. In contrast to other approaches (algorith-
mic music, for example), in the way we deal with computer-aided composition, material
generated by processing will be reworked by the composer according to other criteria,
most of them aesthetic. The composer does not know beforehand what the final result
will be like. Material generated by a model is considered to be potential material only,
and it is up to the composer to decide where to stop the model. A model generates a fam-
ily of results that can be progressively extended; it can thus be viewed as an equivalence
class of compositions. In this context, two identical results that do not come from the
same model are not equal, because they do not have the same evolutionary possibilities.
The score describes the work in notes, but it is the model that gives it meaning. Among
other things, it determines whether a trend will move towards or away from complexity,
as well as the specificity of the work’s constructions. To take an example on the level of
form, there are compositions that start out from the global and then “fill in” on the local
scale; or, vice versa, there are pieces in which it is the assemblage of the local sections
that brings the whole to light.
In A Collection of Caprices, Paul Nauert uses a strategy that he calls “top-down”. The
composer decides that his piece will comprise 11 sections, with long and short sections
alternating. Then he composes them. The OMTimepack library, implemented by the
composer, provides tools for generating durations according to a specific statistical profile.
Once the rhythmic sketch of the piece has been completed, the composer inserts the
pitches. To do this, he uses pitch equivalence classes that create a uniformity dynamic
within each section, while creating contrasts between different sections.
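
As a hedged sketch of what generating durations according to a statistical profile can
mean (the value menus and weights below are invented, and OMTimepack itself offers
far finer control), one can draw from a weighted set of durations until a section of the
requested length is filled:

    import random

    def fill_section(total, values, weights, seed=11):
        rng = random.Random(seed)
        durations, remaining = [], total
        while remaining > 0:
            d = min(rng.choices(values, weights)[0], remaining)
            durations.append(d)        # clip the last value to fit exactly
            remaining -= d
        return durations

    # A short section favouring short values, a long one favouring long values:
    print(fill_section(8, [0.25, 0.5, 1], [5, 3, 1]))
    print(fill_section(16, [0.5, 1, 2], [1, 3, 5]))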
The opposite, “bottom-up” approach is to be found in the article “Composing the
qualitative, on Encore’s composition” by Jean-Luc Hervé and Frédéric Voisin. Encore
is a piece for 18 musicians, electronics and two Disklaviers. The MIDI Disklavier scores
were created using OpenMusic. An important phase was the realisation of simple compo-
sition gesture “prototypes”: ascending runs, descending runs and repeated notes. These
elementary gestures were then given reciprocal articulations, and placed in the OpenMu-
sic maquette editor (see appendix). In this way, changing a gesture inside a maquette
can have an impact on the way other gestures are updated, both downstream and up-
stream. The entire cadenza is made up of a succession of 10 boxes corresponding to the
10 phrases of the cadenza. Each box is itself a maquette, containing the phrase’s entire
gesture sequence.
These sophisticated, high-level representations of musical structures and knowledge,
in conjunction with the OpenMusic computer language, have made it possible for us to
bring together in one book a broad community of composers as well as various original
forms of musical thought so characteristic of the information technology environment.
Reading through the various articles, it becomes obvious that the composer is able not
only to represent (and perform) the final score, but can also take care of all the vari-
ous levels of formal elaboration, and where applicable, algorithmic generators. Works
that are created in this way partly contain their own structural analysis. Such a sym-
bolic approach to computer-aided composition has undoubted relevance for the composer.
Among other things, modelling a piece makes it possible to do the following: substitute
one compositional notion for another at varying levels of abstraction; reduce the amount
of information necessary for describing a musical object; have access to a veritable sym-
bolic function, i.e. one that can be read, written and memorised; take advantage of the
symbolic nature of information technology so as to be able to gain increased control over
the computer. Finally, this symbolic approach to computer-aided composition promotes
and stimulates relationships between music and other artistic domains. Music, painting
and poetry, for example, are linked together in Hèctor Parra’s Strette. In his article, the
composer set up a direct link between the vocal component in his composition and the
features of Paul Celan’s poem Engführung. In a parallel way, links are proposed between
music and painting, taking as a motif the Château Noir by Cézanne. Various patches
are used in formalising the way Hector Parra designs and creates relationships between
colour, rhythm and pitch.
This computer formalisation of high-level musical structures (notes, chords, voices,
etc) may give the impression that the physical reality of music, i.e. the sound, is neglected.
Indeed, the process of modelling sound phenomena using formal concepts that are on a
higher level of abstraction than the signal models remains for the moment a utopian
project. What is even more problematical is the way those two levels of musical repre-
sentation, the signal and the symbolic, relate. This is a scientific obstacle that has yet to
be overcome. Nevertheless, this obstacle has not prevented composers from establishing
formal links between sound and the written symbol. In the present book are to be found
artistic propositions whose aim is to build a bridge between the two worlds, one that will
allow a two-way exchange: extracting information from signals, and controlling sound
signal synthesis.
Klangspiegel by Luis Antunes Pena is a composition based on the manipulation of
data generated by sound analysis. The data are transformed in OpenMusic and then
synthesised, producing a “virtual image” of the original sound. The analysis is also used
in another way, on the macro time level. By means of the combinatory power of the
computer, the data generated by analysis are made to respond to symbol manipulation
(for the most part, in and between the rhythmic and harmonic domains). The results
thus obtained, sound synthesis and instrumental writing, are considered by the
composer to be a dual representation of the original sound. Confronting these two images
or representations is the composer’s chief preoccupation in this work.
In “Some applications of OpenMusic in connection with the program Modalys”, Paola
Livorsi explains how she uses OpenMusic to control sound synthesis parameters that are
then employed by the program Modalys. Multiple synthesis parameters are prepared in
OpenMusic, among them access envelopes, and sound and resonance duration values.
Apart from performing a task that would be difficult and imprecise if done by hand,
OpenMusic made it possible to control the parameters in a dynamic way, throughout
the sound’s entire duration. Paola Livorsi also introduces random variations in order to
enrich synthesis of the sounds.
There are numerous instances in the book where composers have had to “program”
their own composition tools for personal use. The programmes thus created have been
integrated into OpenMusic in the form of libraries. So a library can be said to be an open
version of what was originally a personal composition tool. In her article “To Touch the
Inner Sound, Before it Becomes Music; to Dream About Dreams, Before they Become
Real” Elaine Thomazi-Freitas explains how she used the om-tristan library (created by
the composer Tristan Murail) in the frequency domain. Using spectral analysis data,
Elaine Thomazi-Freitas extracts rhythmic and harmonic figures and manipulates them
in order to generate the musical content of her composition Derrière la Pensée.
Thus, OpenMusic is software in continuous development. Day by day, it becomes
richer thanks to the special programs created by composers. Can it be said that software
brings into being composer-programmers or, rather, that programming is a necessary
and unavoidable practice when using the computer to create music? The first of those
two notions seems far too pretentious to us, and so we prefer to believe in the second.
However, in compiling the present texts, we do not lay claim to demonstrating such a
hypothesis. The idea of the present book is to constitute a source of reflection, not from
one but from several viewpoints.
As far as music is concerned, the texts may serve as a potential tool for any composer
looking to explore the many links between compositional thought and use of the com-
puter. Moreover, in the hands of a “working musicologist”, the computer tools described
in the book may become the starting point for new directions in systematic musicology,
i.e. for purposes of proposing and studying formal models that contain clues to the cre-
ative processes leading up to a work of music. Last but not least, this research could also
be useful to the community of music lovers interested in gaining a better understanding
of certain aspects of current musical exploration.
From the viewpoint of computer science, this book can be viewed as a highly practi-
cal research tool. Using the developments presented here, the computer scientist could
extrapolate certain ideas to other activities and fields such as AI or Human-Computer
Interaction, in which programming languages are used. It is our hope that this type of
study might make it possible to better grasp certain concepts in computer science, and
thus be able to assess their expressive potential.
The present book is a first step towards an approach to the current study of computer-
aided composition. It will be followed by other volumes, always with a view to showing
a living and vibrant domain that remains as yet largely unexplored.
We cannot individually thank all the people who took part in preparing this project
for publication, but we hope that each and every one will consider this as an expression
of our profoundest gratitude.

Carlos Agon

Writing a Homage to Marin
Mersenne: Tombeau de Marin
Mersenne for Theorbo and
Synthesiser (General Midi)
- Michel Amoric -

Abstract. This piece is a homage to Marin Mersenne’s fascination with the prob-
lems of consonance, acoustics and organology, which are of particular interest to us today.
OM was crucial in writing this Tombeau de Mersenne. In particular, OM was used to
work with Mersenne numbers, the spectral analysis of instrumental samples, the
fragmentation of old scores and the implementation of rules presented in Harmonie Uni-
verselle. Once the OM score was realised, I separated the lute part from that of the synthesiser.

***

1 Introduction
Medieval Déplorations, Dump, and Motets, Renaissance Lachrimæ, Funeral Tears, and
Laments, or the Baroque Tombeaux 1 . . . , were all forms that composers adopted to com-
memorate both their peers and their patrons.

Figure 1. Marin Mersenne, Oizé 1588 – Paris 1648 (print room, BNF)

1 It is important to note that the tombeaux of this period, written for the lute, theorbo, harpsichord
or viol, represented the quintessence of the music of the time, particularly in 17th-century France.


If this tradition was somewhat neglected during the 19th Century, it was once again
embraced by composers in the 20th century2 .
This Tombeau of Marin Mersenne, is an elegiac homage to one of the precursors to
the relationship between science and music.

2 INVENTIO ... Using OpenMusic


The material for this piece was specifically chosen for its correlation to Marin Mersenne’s
works on the question of music:
• Mersenne numbers, applied to certain musical parameters: a homage to Mersenne
the mathematician.
• Samples of instrumental sounds: a homage to Mersenne the organologist.
• Fragments of pieces from the 17th century, and rules on 17th century composition:
a homage to Mersenne the musicographer.
• Constraints in writing presented in Harmonie Universelle: a homage to Mersenne
the theoretician.

2.1 Mersenne numbers (Mn = 2^n − 1)


Number systems have often been used by composers3 to code certain parameters of their
work, in the hope that by resorting to mathematical abstraction they would break with
the mould of their time (Assayag, 2000). As mysticism played a central role in developing
the relationship between figures and art, it seemed to me particularly appropriate to write
a Tombeau de Mersenne. I applied Mersenne numbers to the domains of pitch scales,
pitch intervals and durations. Let us remember that a Mersenne number Mn = 2^n − 1
can be prime only when the exponent n is itself prime (the reverse being untrue), and
that writing one in base 2 is fairly simple: it consists of n ones. The exponents n for
which Mn is prime are: 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281,
3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049,
216091, 756839, 859433, 1257787, 1398269, 2976221. . . 4

2 This can be seen as of 1917, with Maurice Ravel as well as G. Migot, M. Dupré, M. Delage, A.
Jolivet, M. Kagel, E. Satie, F. Schmitt, P. Dukas, B. Bartok, M. de Falla, M. Ohana, O. Messiaen, G.
Pierné, J. Rodrigo, Ropartz, G. Amy, A. Boucourechliev, M. Constant, B. Jolas, A. Louvier,
I. Xenakis, and let us not forget: I. Stravinsky (In memoriam Dylan Thomas, 1954) or P.
Boulez (with his Rituel in memoriam Bruno Maderna, 1974, or Explosante fixe, 1973, for the death of
Igor Stravinsky).
3 Various means were employed to objectify sacred numbers (1, 3, 7. . . ): geometric constructions
(symmetry, golden numbers. . . ), poetic or rhetorical models. . . This numerical identification was par-
ticularly useful in facilitating either the memorisation or recognition of works, in the fashion of “Oratio
Numerosa”.
4 It is easy to show that if the number n = 2^p − 1 is prime, then p must also be prime. In 1644
Mersenne asserted that n is prime for p = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 and 257, but not prime for
the 44 other prime numbers less than 257. It was later proved that Mersenne was wrong concerning 5
of the prime numbers less than or equal to 257 (two were accepted that do not lead to a prime number,
67 and 257, and 3 were excluded that do: 61, 89 and 107). Note that some 49,000 participants have
joined a world-wide competition on the web to find new and even bigger Mersenne primes.


Figure 2. Implementation in OM of Mersenne’s formula, applied to calculating a mode of the
chromatic scale

With Mersenne numbers, I was able to establish series of harmonies, frequencies and
durations. Computed with OM’s Zn library, Mersenne numbers reduced modulo 12 provide
a mode of six intervals: 2, 1, 1, 2, 2, 4, which I treated exactly as it appears, or grouped
according to authentic and plagal modes.
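The original calculation was done with OM’s Zn library (figure 2). As a point of reference only, the short Python sketch below gives one plausible reading of the reduction, in which the exponents of the Mersenne primes listed above are taken modulo 12; under that assumption it does reproduce the six-interval mode, up to rotation.

# Exponents p for which M_p = 2**p - 1 is prime (first twelve of the list above).
MERSENNE_EXPONENTS = [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]

def mode_from_exponents(exponents, modulus=12):
    # Reduce the exponents modulo 12 and keep the distinct pitch classes.
    return sorted(set(p % modulus for p in exponents))

def cyclic_intervals(pcs, modulus=12):
    # Successive intervals around the chromatic circle; they sum to 12.
    return [(pcs[(i + 1) % len(pcs)] - pcs[i]) % modulus for i in range(len(pcs))]

pcs = mode_from_exponents(MERSENNE_EXPONENTS)
print(pcs)                    # [1, 2, 3, 5, 7, 11]
print(cyclic_intervals(pcs))  # [1, 1, 2, 2, 4, 2] -- a rotation of 2, 1, 1, 2, 2, 4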

2.2 Instrumental sound samples

Analysed using the sonogram program AudioSculpt, the spectral data of lute sounds were
exported to OM using the Repmus library. We were thus able to obtain patterns of 6
to 30 notes representing the acoustic behaviour of instrumental sounds. The first part of
the piece largely uses these patterns, in a superimposed, contracted or dilated fashion.
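At bottom, turning analysis data into such note patterns amounts to rounding partial frequencies to the nearest pitch, expressed in OM as midicents (100 per semitone, A4 = 6900). The Python sketch below is an illustration only, with hypothetical function names; the actual conversion was handled by the Repmus library.

import math

def freq_to_midicent(f_hz, ref_hz=440.0, ref_midicent=6900):
    # Midicents: 100 per semitone, with A4 (440 Hz) = 6900.
    return ref_midicent + round(1200 * math.log2(f_hz / ref_hz))

def partials_to_pattern(partials, max_notes=30):
    # Keep the strongest partials and return them as approximated pitches.
    loudest = sorted(partials, key=lambda fa: fa[1], reverse=True)[:max_notes]
    return sorted(freq_to_midicent(f) for f, _amp in loudest)

# (frequency Hz, amplitude) pairs, as a spectral analysis might deliver them:
print(partials_to_pattern([(220.0, 0.9), (446.0, 0.5), (662.0, 0.3)]))
# -> [5700, 6923, 7607]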

13
Michel Amoric

Figure 3. A circular representation of interval relations in the twelve-tone scale according to
M. Mersenne, in Harmonie Universelle, vol. II, livre second, des dissonances, p. 136

Figure 4. Use of the Zn library to construct harmonies whose pitch intervals were
chosen using Mersenne numbers

2.3 Selection of fragments and pitch totals from a series of 17th-century pieces
Several tombeaux for solo lute, by contemporaries of Marin Mersenne, were captured
in FINALE, then imported into OM.
These “quotation fragments” were directly reproduced or else underwent a rhythmical
and melodic hybridisation (by adopting the rhythm of one section and applying it to the
pitches of another). In addition, the pitch totals and their statistical distribution taken


from the repertoire pieces were used as a “reservoir” of pitches in the second and third
parts.

Figure 5. Representation of the pitch totals of four “tombeaux” for 17th-century lute,
providing a statistical estimation

2.4 Various writing constraints proposed by Mersenne in his Harmonie Universelle
As in all treatises of this period, Mersenne outlines in his Harmonie Universelle (Mersenne,
1636) a succession of constraints, such as:

• not repeating a same interval more than three times,

• avoiding the succession of two fifths or parallel octaves5 ,

• not using the melodic tritone,

• favouring contrary movements,

• alternating consonance-dissonance6 ,

• privileging proximities7 ...

The Situation library enabled us to incorporate these constraints and apply them to
numerous passages in the second part of the piece, linked to Mersenne numbers.
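The Situation library’s actual constraint syntax is not reproduced here. As a toy illustration under that caveat, rules of this kind reduce to predicates that accept or reject candidate material; the hypothetical Python functions below sketch two of them:

def parallel_perfect(prev_pair, next_pair):
    # Two voices moving in similar motion between perfect consonances
    # (unison/octave = 0, fifth = 7, counted modulo the octave).
    (a0, b0), (a1, b1) = prev_pair, next_pair
    similar = (a1 - a0) * (b1 - b0) > 0
    perfect = lambda lo, hi: abs(hi - lo) % 12 in (0, 7)
    return similar and perfect(a0, b0) and perfect(a1, b1)

def melodic_tritone(line):
    # Any melodic step of six semitones.
    return any(abs(b - a) % 12 == 6 for a, b in zip(line, line[1:]))

# Reject a candidate two-voice move such as parallel fifths C/G -> D/A:
print(parallel_perfect((60, 67), (62, 69)))  # True -> to be rejected
print(melodic_tritone([60, 66, 65]))         # True -> to be rejected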

3 DISPOSITIO . . . & OM models


The score of the piece is composed of 21 elements, divided into three parts. This division
echoes rhetorical literary or musical figures used in the 16th century.

5 “One must not place two or several types of consonance in succession, particularly when they are
perfect, let the parts progress with similar movements, or changes of harmony.”
6 “One must place an imperfect consonance during or after a perfect one where possible; or if one
follows a set of two perfect consonances, they must differ in type.”
7 “One must pass wherever possible to the consonance of the nearest degree; consequently one must
pass from a minor third to a unison, and a major sixth to an octave; this can be heard in passages made
up of contrary movements.”


Figure 6. A patch that demonstrates Mersenne’s various writing constraints, applied to a
series of Mersenne numbers

The first part of the piece is composed of nine sections, based on spectral analysis
as described above. These sections were then transposed, contracted, dilated, isolated or
superimposed.
The second part includes pseudo-canons constructed using Mersenne numbers. A
section of periodic chords was devised from the general harmonic fields of the tombeaux.
The third part is characterised by an ostinato of periodic chords produced from a series
of Mersenne numbers and then controlled by interpolation algorithms. Events
based on fragments of 17th-century texts ornament the entire piece. Once the score was
created, I divided it into two, with one part devoted to MIDI sounds and the other to
instrumental sounds (see figure 7).

4 ELOCUTIO . . . Using the OM model


The combination of the sound of the lute, both complex and versatile (which represents
the 18th-century tradition and common taste, as can be imagined), with that of


Figure 7. Maquette of the piece

synthesised “general midi” sounds8 (symbolising technological expertise and today’s more
trivial taste) may furthermore be considered as a rhetorical subject.
In accordance with Charles Koechlin9 (Koechlin, 1954), I think that it is important
to write for the lute, as its repertoire is still somewhat limited. This can be explained by
the lack of knowledge regarding this instrument or by the importance of its tradition; it is
high time that its repertoire grew in the same way as that of the harpsichord, another
instrument neglected in the 19th century.
The pre-recorded part (eight tones were selected, on eight channels) was taken directly
from the OM model. For each channel, the chorus, panning and reverberation were
determined. The instrumental score, written using FINALE 2003, includes a sketch of
the pre-recorded part in solfegic notation (see figure 8).

5 Conclusions
This piece is a homage to the “Minime Sarthois” and his fascination with the problems of
consonance, acoustics and organology, which are of particular interest to us today. OM was crucial
in writing this Tombeau de Mersenne. In particular, OM was used to work with
Mersenne numbers, the spectral analysis of instrumental samples, the fragmentation
of old scores and the implementation of rules presented in Harmonie Universelle. Once the
OM score was realised, I separated the lute part from that of the synthesiser. If Mersenne

8 Tones: 10, 15, 113, 11, 13, 5, 61, 62.


9 “One constructs modern lutes on old models; and I would wish, as I so greatly wish for the harpsi-
chord, that this instrument could have a modern and original repertoire.” Instrumentation treatise.


Figure 8. First page of the score using Finale

thought that aesthetic questions varied with time10 , he knew that the question of science
and music remains. Therefore this Tombeau de Mersenne treats the subject beyond the
context of time, as described by Mersenne in the 17th century, with the aesthetics and
technologies of today.

10 “Since long exercise renders sweet and easy what appeared before to be unpolished and
disturbing, I do not doubt that the dissonant intervals of which I have spoken in this proposition, to
wit from 7 to 6, & from 8 to 7, which divide the fourth, will become agreeable, if one is accustomed
to hearing and enduring them, & that one uses them as one must in recitals and concerts, in order
to stir passion, and for various effects of which ordinary music is deprived.” M. Mersenne, Harmonie
universelle, vol. I, Ch. Cons., p. 89.

Bibliography
[1] Assayag, G.: De la calculabilité à l’implémentation musicale. Séminaire Entretemps-
IRCAM, Mathématiques-Musique-Philosophie, 7 October 2000.

[2] Bosseur, J.-Y. and Bosseur, D.: Révolutions musicales, 4th edition. Minerve, Paris,
1993, 268 p.

[3] Boulez, P.: La vestale et le voleur de feu. In InHarmoniques, no. 4, “mémoire et
création”, September 1988, pp. 8-11.

[4] Kœchlin, C.: Traité d’Orchestration. Eschig, Tome I, p. 205, 1954.

[5] Malherbe, C.: “L’enjeu spectral”. Entre-temps, no. 8, Paris, 1991, pp. 7-26.

[6] Mersenne, M.: Harmonie universelle. Paris, facsimile of the 1636 edition by F.
Lesure, Ed. CNRS, 3 tomes, Paris, 1963.

Michel Amoric
Michel Amoric holds a PhD in musicology and lectures
at Paris IV University; he also holds a PhD in sciences
and lectures at Paris VII University.
He began by devoting himself to his career as a guitar and
lute performer, both as soloist and as a member of the
2e2m, Intercontemporain and l’Itinéraire ensembles, as
well as the Radio France Philharmonic and National Or-
chestra, and the Orchestra of Paris. He participated in
the world premieres of more than 100 works, and the re-
discovery of hundreds of works from the 16th, 17th and
18th century repertoires. Michel Amoric is one of the very
few lute players to devote himself to the rebirth of music for the lute and the theorbo.

Electronics in Kaija Saariaho’s opera,
L’Amour de loin
- Marc Battier and Gilbert Nouno -

Abstract. This article describes, using OpenMusic, the different steps that are nec-
essary when creating resonant filters based on a combination of instrumental models and
chords that come from a specific piece. These filters are the foundation of the electronic
part of the opera.

***

1 L’Amour de loin in Kaija Saariaho’s work


Before composing her first opera L’Amour de loin, Kaija Saariaho wrote works that
prepared her to experiment with new approaches. Being always interested in discoveries,
the Finnish composer, after studying at the Sibelius Academy in Helsinki, attended the
Darmstadt courses, then studied with Brian Ferneyhough in Freiburg. In 1982, she
attended a workshop at IRCAM and was initiated into the most advanced computer
techniques of the time. She rapidly put these techniques into practice in the works
she wrote in Paris, where she remained in touch with this institute. Her works with
computers seem to be the natural continuation of the first works she started back in 1980
at the experimental studio of the Finnish radio in Helsinki. The resulting work, Study
for Life, is also a first step towards theatricalising a musical work, since it involves lighting
and gestures of the singer and dancer. One should also notice the composer’s affection
for voice and the instruments she most often calls upon: flute (Noanoa, 1992) and cello
(Petals, 1988; Près, 1992; Amers, 1992). This preference, along with work on voice and
song, is present in her work right from the early opuses in her catalogue.
During her stay in Paris, the composer started developing a refined and complex
style of work. The first work that shows how her conception of music is rooted in
the potential of computers is Vers le Blanc (1982), composed at the IRCAM. Assisted
by the voice synthesis program Chant (conceived by Xavier Rodet at the end of the
1970s), Kaija Saariaho built her composition as the transformation of one chord
into another. Two of her major and ongoing concerns appear here. The first is the
gradual transformation of one type of material into another, most often by interpolating
a program’s parameters. The second is the use of sound analysis methods in order to
derive harmonic fields, achieving a transition from timbre to harmony1.

1 About the issue of passing from timbre to harmony, see: K. Saariaho, “Timbre et harmonie”, in Le
Timbre, métaphore pour la composition, edited by J.-B. Barrière, Paris, Ircam and Christian Bourgois,
1991, pp. 412-453.


The composer had already approached melodic writing in Château de l’Âme, a work composed
for the Salzburg Festival. Opera definitely enhances the breadth of melodic curves and adds
various modes of declamation, such as scansion. However, Kaija Saariaho remains
faithful to her post-spectral legacy: it can be clearly observed in many instrumental parts
and appears in the hybrid writing that oscillates between melody and the search for links
between sound sources and harmony.
The opera is composed on a libretto2 written by Amin Maalouf, writer and winner of
the 1993 Goncourt prize for his novel Le rocher de Tanios. There are three characters
on stage: Jaufré Rudel, troubadour and prince of Blaye (baritone), Clémence, Countess
of Tripoli (soprano), and the Pilgrim (mezzo-soprano). The historical figure of Jaufré
Rudel, the troubadour, lived in the 12th century. The songs he composed for his amor
de lonh have been preserved. Rudel wrote several songs about the distant love he
had for the Countess of Tripoli. Kaija Saariaho used one of these songs, Lanqan li jorn
son long en mai, in her work for soprano and electronics, Lonh (1996), written as if it
were a prologue to the opera to come, then used it again in the opera itself. This poem is
central to the troubadour’s work, and therefore in the vida, the biographic account that
was written about him in the 13th century. This anonymous vida established the legend
of the poet in love with a distant lady, his journey and his death in a country far from
home.
Finally, a choir handles several parts: the troubadour’s companions (acts I, IV and
V), and the Tripolitans’ Choir (acts III and V).
The libretto is written in modern French, except for the lyrical parts where Clémence
sings three excerpts of Jaufré’s poem Lanqan li jorn son long en mai in Old French: “Ja mais
d’amor no-m gauzirai” (chant V), end of act II, “Ben tenc lo Seignor per verai” (chant
II) and “Ver ditz qui m’appela lechai” (chant VII), act III scene 2. The voice of Dawn
Upshaw, who also premiered the opera at Salzburg, is heard in the electronic part.

2 The electronic tools


The electronic sounds were made by Kaija Saariaho and Gilbert Nouno in the studios
of the IRCAM with two types of tool: OM, a language for the manipulation of symbolic data
(such as scores), and MAX/MSP, a program based on the free patching of signal-processing
and real-time sound-synthesis modules.
Creating electronic sounds is a process that goes through several steps during which
the electronic tools are articulated. These steps are most often in the following order:

• sampling voices and natural sound sources (water, wind, birds),

• analysis of several instrumental samples by resonance models,

• creating resonance filters based on harmonic structures (with OM),

• filtering voices and natural sound sources with the above-mentioned filters (inter-
polating filters in the MAX/MSP environment),

• spatialising the various sound layers.

2 The libretto is available online at Shrimer’s site http://www.schirmer.com/amour/indexf.html.


With OM, the electronic part is first written in a form close to a score and in relation
to the orchestral and vocal score, in an abstract space of sounds whose sonorities are
only imagined at this stage. MAX/MSP is then used to perform the sound synthesis from
the data defined in OM.

3 The sound materials


The original sound material includes samples of voices and instruments. The voices are
mainly speaking voices, declaiming text from the libretto.

3.1 Resonance models


Resonance models allow the analysis of an acoustic instrument’s timbre and its translation
into parameters. Resonance is used to collect these parameters, its classical definition
being the response of an instrument’s body to an excitation from a natural or an in-
strumental sound. The analysis consists in making successive temporal estimations of a
signal’s spectral envelope. A program refines this analysis, by keeping only the most sig-
nificant regions of the spectrum’s components. These are the resonances, characterized
by three parameters: frequency, amplitude and bandwidth.
One of the goals of resonance models is to constitute a platform between natural
and synthetic sounds. The distinction between excitation and resonance creates a rich
and fruitful experimental space for transforming analysed sounds. One of the ways of
obtaining sounds by the cross-synthesis method is therefore to replace the usual excitation
(impulse, white noise) by a sound sample: if a voice is used, the crossing will be between
this voice and the resonance given by the instrument’s model. This process of filtering a
sound source by a filter to which the parameters of a resonance model are attributed is
called source/filter synthesis.
The analysis by resonance models was made with the ResAN program, a component
of the Diphone software developed by the Analysis/Synthesis team at the IRCAM. The
following instruments were used to synthesise most of the opera’s sounds: bow-cymbal,
crash-cymbal, bell-tree, arco double-bass, pizz double-bass, glockenspiel, tam gong, harp,
marktree, piano, piano clusters, timpani of various sizes and tubular bells. We will see
further how these models were modified according to the score in order to act as extensions
of the orchestra.
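For readers unfamiliar with source/filter synthesis, the sketch below shows the principle in miniature. It is a generic two-pole resonator bank written in Python, not the ResAN/Diphone implementation: each analysed resonance becomes one narrow filter, and the narrower its bandwidth, the longer it rings.

import math

def resonator_bank(excitation, resonances, sr=44100):
    # Sum of two-pole resonators, one per (frequency Hz, amplitude, bandwidth Hz).
    out = [0.0] * len(excitation)
    for freq, amp, bw in resonances:
        r = math.exp(-math.pi * bw / sr)   # pole radius: narrow band -> long ring
        a1 = 2.0 * r * math.cos(2.0 * math.pi * freq / sr)
        a2 = -r * r
        y1 = y2 = 0.0
        for n, x in enumerate(excitation):
            y = amp * x + a1 * y1 + a2 * y2
            out[n] += y
            y2, y1 = y1, y
    return out

# A unit impulse as excitation yields the model's own "ring"; replacing it
# with a voice or nature sample gives the source/filter cross described above.
impulse = [1.0] + [0.0] * 2047
ring = resonator_bank(impulse, [(440.0, 0.5, 3.0), (880.0, 0.2, 8.0)])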

3.2 Chords associated with characters


The electronic sounds are based on harmonic structures that are specific to each charac-
ter. The sounds diffused during Jaufré’s presence are built on chords that characterize
him. The same goes for Clémence and the Pilgrim when he joins Jaufré or Clémence.
The figures below show the chords used to elaborate the electronic sounds for the three
characters of the opera (in OM’s usual notation system, two staves, with treble and bass
clefs, have been added; see figures 1 to 3).


Figure 1. Jaufré: J-chord-a, J-chord-b, J-chord-c, J-chord-d, J-chord-e

Figure 2. Le Pèlerin: P-chord-1j, P-chord-2j, P-chord-3j, P-chord-1c, P-chord-2c, P-chord-3c

Figure 3. Clémence: C-chord-a, C-chord-b, C-chord-c, C-chord-d, C-chord-e, C-chord-f

3.3 Creating resonant filters by using chords and instrumental models
The filters used for sound synthesis, created with OM, are designed to create a cross
between the desired harmonic structure (one of the chords shown above) and an instru-
mental timbre resulting from the analysis by resonance model. The first step consists in
analysing an instrumental sample with the ResAN program. The example (figure 4) shows
a Bösendorfer piano sample, note F#0. The analysis provides, for each resonance, its
frequency, amplitude and bandwidth. It is worth noting that the narrower the bandwidth,
the longer the resonance. The OM patch illustrates the resonance model’s extraction.

Figure 4. Analysis of a Bösendorfer piano sample, note F#0

In this example, Jaufré’s chord J-chord-a is used. After the spectral interpolation has
been calculated, the amplitude and bandwidth corresponding to the instrumental model
are associated with each frequency of the chord’s notes. It should be mentioned that the
filter can be enriched with partials (of order n, with n an integer) added to every note of
the chord. Depending on the degree of enrichment, the result oscillates between
a filter containing only the chord’s frequencies and a filter that approaches the spectral
envelope of the original instrumental sample. The OM patch in figure 5 illustrates how a
filter is generated.
The action of the synthesis filter can be described in several ways: filtering a chord
with an instrumental resonant filter, sieving (grid-screening) a resonant filter by
selecting some frequencies, or applying the spectral envelope of the analysed instrument
to a chord.
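Purely as an illustration of the matching that the patch in figure 5 performs (the function and data below are hypothetical, not the patch’s actual code), the crossing of a chord with a resonance model can be sketched as follows:

def cross_filter(chord_freqs, model, partials=1):
    # model: list of (freq, amp, bandwidth) triples from the resonance analysis.
    # Each chord note (and optionally its first `partials` harmonics) borrows
    # the amplitude and bandwidth of the nearest analysed resonance.
    filt = []
    for f0 in chord_freqs:
        for n in range(1, partials + 1):
            f = n * f0
            _, amp, bw = min(model, key=lambda res: abs(res[0] - f))
            filt.append((f, amp, bw))
    return filt

# With partials=1 the filter holds only the chord's frequencies; raising
# `partials` draws it towards the instrument's own spectral envelope.
piano_model = [(46.2, 1.0, 2.0), (92.5, 0.6, 3.5), (139.0, 0.3, 5.0)]
print(cross_filter([90.0, 140.0], piano_model, partials=1))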


Figure 5. Patch of filter generation

3.4 Filtering sound samples and interpolating filters


For each chord associated with the characters, many filters were created by crossing them
with instruments. About three hundred filters were generated for the entire work. Sound
synthesis was then performed with the MAX/MSP program (figure 6). Samples of whispering
voices, sounds of birds, wind or elements of nature were used as sources and filtered by a
group of three filters that were mixed in real time by the composer. By moving within a
triangle whose corners represent the three filters, one can achieve a mix in which
each filter is weighted by the distance separating it from the point where one stands. Given
the number of chords and analysed instrumental samples, and the use of three simultaneous
filters, it is easy to see how numerous the possible combinations are.
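One standard way to realise such a triangle is with barycentric weights, as in the Python sketch below; this is an assumption about the mechanism, since the actual MAX/MSP patch may well weight the filters by inverse distance instead.

def triangle_mix(p, corners):
    # Barycentric weights of point p with respect to the triangle's corners:
    # the closer p stands to a corner, the stronger that corner's filter.
    (x, y), ((x1, y1), (x2, y2), (x3, y3)) = p, corners
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return (w1, w2, 1.0 - w1 - w2)

# At the centroid, the three filters are equally balanced:
print(triangle_mix((1/3, 1/3), ((0, 0), (1, 0), (0, 1))))  # ~(1/3, 1/3, 1/3)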

4 Mixing and spatializing models


In this opera, space is an important element for sound synthesis. The sounds created
previously were mixed in real time with the spatializer developed at the IRCAM. The
specific version of three-dimensional spatialization that was used allows virtual manipulation,
via track-balls, to place sounds in the three-dimensional space defined by the network of loud-
speakers. The network initially included eighteen loud-speakers to diffuse the sounds. They
were brought down to eight to make the technical settings easier. The diffusion device is a


Figure 6. The spatialization patch in MAX/MSP, including the three track-balls and spatializers

Figure 7. Diagram of movements in the acoustic space


parallelepiped in which the audience is immersed. The diagram drawn by the composer
(figure 7) shows the properties assigned to the sixteen sound files of the first act, including
movements in the acoustic space.
Listeners are immersed in a malleable acoustic space in which the sources filtered
by the resonance models move gently. Surrounded by loud-speakers, the audience is,
in a way, ”inside” the electronic sounds (which themselves come from natural sound
sources) and at the same time ”outside” as listeners. This perceptual ambivalence in
Kaija Saariaho’s writing is like the characters’ emotions, their minds turned outwards to
the ultra-mar, but inwards as well, peering deep into their most intimate feelings.

Marc Battier
Electroacoustic music composer. Professor of musicology
at the Sorbonne (Paris IV). Head of the MINT re-
search group. Co-founder of the Electroacoustic Music
Studies Network.

www.omf.paris4.sorbonne.fr/MINT

Gilbert Nouno
Born in 1970, he studied guitar and double bass
while pursuing an engineering degree. He then be-
came engaged in the performance of classical mu-
sic, jazz and improvisation. His interest in the
cross-relationships of music, science and technology
also dates from this time, and led to an asso-
ciation with Ircam. He is currently a musical as-
sistant with Ircam, collaborating with composers in
realizing works involving the use of computers and
other recent technologies. Among these composers
are Michael Obst, Kaija Saariaho, Jose-Luis Cam-
pana, Philippe Schoeller, Michael Jarrell, Sandeep Bhagwati, Brian Ferneyhough,
Jonathan Harvey and the saxophonist Steve Coleman.

Vuza Canons into the Museum
- Georges Bloch -

Abstract. A remarkable feature of OpenMusic is, of course, the ability to ex-


periment with musical structures that, otherwise, would be impossible to construct by
hand. An example is “canons of maximal category”, that is, non-commonplace rhyth-
mical figures that, when played in canon, form a continuum without any voice playing
simultaneously with another. A mathematical theory of these canons has been developed
by the Romanian mathematician Dan T. Vuza and expanded by Moreno Andreatta at
Ircam. This is an intuitively evident structure that has interested numerous composers,
among them Messiaen. But the construction of such a canon was beyond the means of
composers working without computers, and research on the characteristics of such objects
requires a musical representation tool such as OpenMusic.
Another possibility provided by OM stems from the fact that it is an extension of Lisp. This
allows the use of specific research (in this case, the research on constraint programming
carried out by Charlotte Truchet) to be applied to musical structures.

***

1 Introduction
1.1 The Beyeler Project
In July 2001, a piece of mine was performed, in the framework of the “composers of the
week” series of the Europäischer Musikmonat. Entitled Fondation Beyeler: une empreinte
sonore, this project basically consisted of a musical visit of the collections of the Fondation
Beyeler. The music for this project made extensive use of the so-called “Vuza canons”.
Fondation Beyeler is a museum designed by architect Renzo Piano to house the private
collection of the art dealer Ernst Beyeler. It is located in Riehen, a small town east of
Basel (Switzerland). Containing mainly pictures and sculptures from the 20th century,
it is one of the most impressive private collections in the world. There are also special
exhibitions, which for the most part draw on well-known, relatively provocative periods
of modern art. The performance took place at the same time as a special exhibition
entitled Ornament and Abstraction. Organized by Markus Bruederlin, the Fondation
Beyeler curator, its main idea was that ornament was the “stowaway” of abstract art1 .
This special exhibition was divided into ten sections. In a Prologue, 18th Century
Moroccan windows were presented side by side with Rothko and Taafe paintings. Entitled
“In the Beginning was Ornament”, the first chapter displayed the long history of the line
(especially in the form of Arab Calligraphy) before it was used by abstract artists such

1 See the catalog for the exhibition: Ornament and Abstraction, Dumont / Fondation Beyeler, 2001,

which has been published in German and in English.


as Paul Klee. The next two chapters showed how, in Munich and Vienna at the turn of
the 20th century, Art Nouveau and Secession artists had already achieved a transition
from folk ornament or architectural shapes to Abstraction. The fifth chapter showed how
Abstraction eventually spread to the mural form, as in Matisse’s “Papiers collés” or,
more recently, through fractal murals like those by Sol LeWitt. The next sections were
devoted to the relationship between ornament and signs. The two last sections showed
the ornamentalisation of modernism and the birth of digital mass-ornament (including
video installations by Peter Kogler and Shirin Neshat).
It is important to emphasize that Ornament and Abstraction made wide use of the
Beyeler collection paintings, to which were added paintings borrowed from other museums
as well as works (by artists such as Kara Walker, Daniel Buren or Sol LeWitt)
created especially for this exhibition.

1.2 The musical setup


The musical project consisted of five musical visits, each one being led by a musician
(violin, clarinet, tenor saxophone, double bass and percussion). There was no question of
“setting the pictures to music”, since there is absolutely no need to add music to a Picasso
or Sam Francis picture. In this respect, it was really a visit in the teaching sense of the
term, since the music primarily demonstrated existing relationships between pictures and
space or between the works of art themselves.
One visit was special in that, unlike the other musicians, the clarinettist was
accompanied by a (speaking) guide. Both followed the “Tour Fixe”, an extensive visit
of the foundation (including Ornament and Abstraction). This tour, which looked like
any regular tour, was the “easy-access” visit and was followed by a large number of the
German-speaking audience (at least at the beginning of the performance, since many
people moved from one visit to another). Two tours (saxophone and double bass) were
devoted to the special exhibition Ornament and Abstraction. The last two visits were
thematic. One was about “Picasso and the human Face” (violin). Another was entitled
“Imaginary Landscapes” (percussion). The first explored the fabulous Picasso collection
belonging to Ernst Beyeler, as well as a large number of African and Melanesian masks,
and representations of the human figure. The second one focused mainly on abstract
landscape representations (Claude Monet, Frank Stella, Sam Francis, Francis Bacon. . . ),
actually passing through the Stella part of Ornament and Abstraction.
There were several reasons for this idea of simultaneous visits. First, a museum, even a
relatively small one, is a place in which it is impossible to see everything. Therefore, the
presence of music in another room demonstrated the impossibility of hearing the whole
“concert” and, consequently, the impossibility of grasping all the pictures that were being
seen, had been or were to be seen. There were pictures that would never be seen. Also,
when two guided tours met in a room, the listener’s attention was immediately drawn,
in a somewhat rebellious fashion, to the guide of the other group, especially if that
person was talking about a work already explained by the listener’s own guide. Sometimes
guides collaborated. A number of similar techniques were explored. The musical work
was performed by four large ensembles: a trio, two duets and a finale, with all five players
present (there were other smaller ensembles, with lesser structural impact). The entire
event lasted approximately 70 minutes.


1.3 Some reasons for using Vuza canons


Outside of my personal interest, the Vuza canons were relevant to one important idea
developed in Ornament and Abstraction: the disappearance of the ground, or at least
an ambiguity existing between ground and figure. In Matisse’s Acanthes or Océanie, for
example, the figures (leaves) become part of the huge wallpaper that is the picture. This
applied also to the huge Sol LeWitt mural that marked the beginning (and the end) of
the visits.
Let us remember that these canons are non-trivial solutions to a basic rhythmical
problem: having identical rhythmical voices which, when they are all present, fill up
a rhythmical grid without overlapping. We say non-trivial, since there are obvious but
trivial solutions: when the grid step is a sixteenth note, for example, a single voice playing
each quarter note could be followed by a similar one a sixteenth later, a second one an
eighth note later, and finally a third one three sixteenths later. This would be a valid
solution since the grid is full, with no voice overlaps.
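The tiling property itself is easy to state in code. The following sketch (illustrative Python, not an OM patch) checks that a rhythm R, entered at the offsets S, covers the cyclic grid exactly once, and confirms the trivial solution just described:

def tiles(r_onsets, s_onsets, period):
    # A canon tiles when every translate of the rhythm R by an entry offset
    # in S covers each point of the cyclic grid exactly once.
    hits = sorted((r + s) % period for s in s_onsets for r in r_onsets)
    return hits == list(range(period))

# The trivial solution described above: a voice attacking every 4 sixteenths,
# with entries staggered by 0, 1, 2 and 3 sixteenths, tiles a 16-step grid.
print(tiles(range(0, 16, 4), [0, 1, 2, 3], 16))  # True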
An interesting aspect of these canons is that they provide a relevant metaphor for the
premises of Ornament and Abstraction. The end result (when all voices are present) is a
rhythmical drone, like an ostinato: the dialectic between form and content is immediately
audible, since the end-result of such a contrapuntal display is a single continuous line.
The canons were used in many ways: for instance, the clarinet player gave a signal
every five minutes. This signal was itself the development of a canon. As with a tiling
canon, these calls ended up sounding like a continuum, but each call was also a more
complete version of the preceding one. The canon characteristic of these pieces became
less important than their tiling aspect.
The ensembles, as we mentioned, structured the performance. Three of those (corre-
sponding roughly to the first third, the second third and the end of the visit) are Vuza
canons, entitled Canon 1, Canon 2 and Canon Final. Canon 1 is scored for clarinet,
saxophone and double bass. Canon 2 is for violin and berimbau (a Brazilian percussion
instrument which is a kind of musical bow). The last Canon starts with a repeat of
Canon 1, a different version of Canon 2 for violin, double bass and vibraphone and ends
after a bridge with a third canon with everybody playing.
Some other versions of Canon 2 appear at various moments. One of them, for solo
percussion, is of particular interest: it ceases to be
perceived as a canon, although when Canon 2 is performed later on, it becomes obvious
that it is the same music. Each instance of Canon 2 corresponded to a frame of the
Matisse pictures entitled Océanie.

1.4 Three practical problems


Although we will come back to the aesthetic reasons for using the canons in this project,
they are not the subject of this paper. When using the Vuza canons, several questions
arise, linked to aesthetic choices, but which end up being very practical problems indeed.

-The number of voices.


The simplest solution given for Vuza canons is a six-voice canon on a period of 72 units.
What are the practical solutions when there are five players (or fewer) playing a six- (or
more) voice counterpoint with a relatively perceptible result? More particularly, is the
canon character (the rhythmical unity of the voices) lost?


-The relationship between canons.


Is it possible to generate a large-scale form from several canons? More precisely, how can
they be made to relate, especially if they have neither the same period nor the same number
of voices?
-Continuum or texture?
When the voices are played by different instruments, with diverse attack times and res-
onance characteristics, even a very precise execution will fail to create the impression of
a continuous line, except at very slow tempi. It is perhaps more useful to think in terms
of a texture of equal weight and equal distribution. Working out the harmonic evolution
in such dense textures can be difficult, especially with notes changing at every division
of the beat. That is why a global texture evaluation can be helpful.

2 Basic strategies for choosing canonic values


The choice for the Vuza values was obviously aesthetic. However, the selection process
is interesting, since it is based on an examination of the existing values.

2.1 Examining the rhythmical characteristics of the values


Let’s take the period N = 144 as an example. This is a twelve-voice canon, each voice
composed of twelve note-events per period. Below are the values given by the Vuza reconstruc-
tion algorithm. R is the series of time intervals between the individual note-events of one voice,
and S the series of time intervals between the entrances of the voices. They are interchangeable.

R S
(1 5 3 21 19 8 15 6 19 5 3 39) (2 2 14 2 2 14 2 16 58 16 2 14)
(1 5 24 11 8 23 6 11 8 5 35 7) (2 14 2 16 4 16 16 22 16 16 4 16)
(1 6 1 40 1 7 17 6 17 8 17 23) (2 16 2 16 2 16 10 18 18 18 10 16)
(1 6 33 8 1 24 6 9 8 25 15 8) (2 16 10 10 16 10 18 10 16 10 10 16)
(1 7 5 12 23 8 17 12 11 1 7 40) (4 10 4 14 4 14 18 26 18 14 4 14)
(1 7 5 35 8 5 12 12 11 8 29 11) (4 10 18 4 22 10 22 4 18 10 4 18)
(1 7 17 23 8 11 6 23 1 7 35 5)
(1 8 33 7 8 9 6 25 8 9 24 6)
(1 12 12 15 8 25 12 3 8 1 39 8)
(1 12 27 8 13 12 12 3 8 37 3 8)
(1 24 15 8 19 6 15 8 1 39 3 5)
(3 3 5 37 3 8 13 6 21 8 13 24)
(3 3 37 5 3 21 6 13 8 21 19 5)
(3 6 24 7 8 27 6 7 8 9 31 8)
(3 8 13 27 8 7 6 24 3 8 31 6)
(3 9 12 19 8 21 12 7 5 3 40 5)
(3 9 31 8 9 12 12 7 8 33 7 5)
(5 6 29 8 5 24 6 5 8 29 11 8)
Again, these values can be freely permuted. Clearly, in this case, most R values
display multiples of 3 (or even 9), and S values favor multiples of 2 (or even 4). This


would imply a ternary rhythm for the subject of the canon, and a binary one for the voice
entrance points (or the opposite, if we take R as entrance points and S as subjects). This
result is not at all surprising, since 144 = 2^4 × 3^2. It is not a general result: in the
case of N = 108 (2^2 × 3^3), we can no longer assert that the R values are “ternary”, and
the S values are more ternary than binary.

2.2 The value of finding redundant rhythms

One of the interesting features of the canons is the way the music, starting out as a rec-
ognizable rhythmical shape (at first emphasized by the canon entries), gets transformed
into a continuum. Thus, finding redundant rhythms inside the individual voices can be
of value.
By implementing a short OpenMusic program that analyses R time-point values, we
can find those that are most divisible by 9. Of course, we take the time-points into
account, because they give rhythmical values relative to a given beat: this means that
(1 5 3 21 19 8 15 6 19 5 3 39), for example, will be transformed into (0 1 6 9 30 49 57
72 78 97 102 105 [144]). Taking the circular permutations into account is important: indeed, if
we consider a list of values such as (1 8 9 9 9 9 9 9. . . ), it would provide points perfectly
on the beat of a time-division by 9; but this is not true of (8 9 9 9 9 9 9. . . 1), where all
points other than the first will fall off-beat.
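A minimal Python equivalent of such a program (an illustration, not the original OM code) converts durations to time-points and scores each circular permutation by how far its attacks fall from the 9-unit beat:

def time_points(durations):
    # Onsets of a duration list, starting from 0 (the final sum is the period).
    points, t = [], 0
    for d in durations:
        points.append(t)
        t += d
    return points

def off_beat_weight(durations, beat=9):
    # Sum of the onsets' remainders modulo the beat: 0 would mean that every
    # attack falls exactly on a beat of the 9-unit measure.
    return sum(t % beat for t in time_points(durations))

def rank_rotations(durations, beat=9):
    rotations = [durations[i:] + durations[:i] for i in range(len(durations))]
    return sorted(rotations, key=lambda rot: off_beat_weight(rot, beat))

r = [3, 6, 24, 7, 8, 27, 6, 7, 8, 9, 31, 8]
print(time_points(r))        # [0, 3, 9, 33, 40, 48, 75, 81, 88, 96, 105, 136]
print(rank_rotations(r)[0])  # the rotation whose attacks best fit the beat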
In the case of N = 144, the program computes the sum of the remainders modulo 9 of the
time-points of all R elements (as well as of all their permutations). For Canon 1 (played
in front of Matisse’s Acanthes), the solution (3 6 24 7 8 27 6 7 8 9 31 8), the third best,
was chosen. There are several (mostly subjective) musical explanations for this
choice: in order to have a clear rhythm, we wanted to avoid a “scotch snap” (short value
followed by a long one, like 1 8) at the onset, and this rhythm was present in the other
solutions. The third note, long, accented and clearly on the first beat makes for a clear
beginning. In addition, all values with the accent on every 12th unit were eliminated so
as to avoid a binary feel.
One has to keep in mind that all this rhythmic clarity rapidly disappears as each voice
enters. That is the reason for choosing an almost “cliché” pattern. We actually selected
a ternary rhythm for the voice patterns of all canons. The reason for this was that we
desired to create a link between all the canons. This is the topic of our “second problem”,
namely the construction of a relationship between canons of different periods.

2.3 Ordering the voices

In order to make the redundancy more obvious, and therefore to delay the appearance
of the continuum-like texture, we had the voices which start “on the beat” appear first.
If we take 9 as the measure division, a good result for nine yields also a good one for 3,
therefore implying a ternary division of the beat (for example, a measure in 9/16, where
the sixteenth note is the rhythmical unit of the canon). So the voices starting on 0, 3, 6
will also reinforce the sensation of triple time.


3 Reducing the number of voices


The Vuza theory yields many-voiced structures. The smallest one (not used in
Une Empreinte sonore) has a period of 72 rhythmical units and is divided into six voices.
Six is the smallest possible number of voices. This can become a problem when one is
dealing with a small number of instruments or, more simply, with a more limited number
of streams in the polyphony.
The problem becomes particularly obvious when we reflect that the end result of the
process is a continuum, implying a monophonic texture (whether or not this impression
is well founded).

3.1 The basic solution: compound voices


In some respects, the problem of the number of voices is a false one: there are countless
works in the history of western music written for a monophonic instrument (for example
Bach’s flute pieces, etc.). The classical solution is to compound the voices, that is, to
merge two lines into one by playing the notes when they are supposed to appear and stop
them when another one appears, whether or not it is the same voice. This is common
procedure in baroque music.
Naturally, the same technique is perfectly applicable to Vuza canons. In our project,
the clarinet made a “call” every five minutes. These were Canons, with one voice being
added every ten minutes.

3.2 Sub-canons
In the case of a compound solution with several instruments, another problem obscures
the canon characteristics: a given value of S sets the distance between the entrances
of the voices. There is very little chance that the distance between the voices of each
compound will be the same. As an example, letı́s imagine a period of 144 with S=(2 2
14 2 2 14 2 16 58 16 2 14). We can use permutation and put 58 in the end, which gives
(16 2 14 2 2 14 2 2 14 2 16 58). If we try to generate four voices from these twelve, the
starting points will be as follows:

V1 = (0, 0+16, 0+16+2),

V2 = (32, 32+2, 32+2+2),

V3 = (50, 50+2, 50+2+2),

V4 = (68, 68+2, 68+2+16).

Only voices two and three display similar compound values (start, start+2, start+4).
The other voices are different. This gives us an interesting lead in selecting the sub-
canon voices: if we can find a case where all values are the same, we will have similar
sub-canons.


Getting a smaller number of voices


In our search for a smaller number of voices, we will therefore mix (compound) some
voices together. The result will be called a metavoice.

Using the “on the beat — offbeat” strategy


For example, with a ternary triple time (that is three beats divided into three subdivi-
sions) we can easily distinguish when the voices start. We have already emphasized how
we can make the basic voice rhythm (here ternary) more obvious by mixing the voices
starting on the beat. Here is an example: with a period of 144 and S = (18 10 16 10
10 16 2 16 10 10 16 10), the attack time points are (0 18 28 44 54 64 80 82 98 108 118
134). A simple modulo 9 operation shows that 0, 18, 54 and 108 start “on the beat”; 28,
64, 82 and 118 start one unit later; and 44, 80, 98 and 134 start just before the
beat. We would therefore build our metavoices from these three rhythmic groups: one
starting on the beat M1 =(0 18 54 108), one one sixteenth later M2 =(28 64 82 118) and
one one sixteenth before M3 =(44 80 98 134).
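In code, this grouping is simply a partition of the entrance onsets by their residue modulo 9. The Python sketch below (again an illustration rather than the OM patch) reproduces the three groups above:

def metavoices_by_residue(s_intervals, beat=9):
    # Accumulate entrance onsets, then group them by onset modulo the beat.
    onsets, t = [], 0
    for d in s_intervals:
        onsets.append(t)
        t += d
    groups = {}
    for onset in onsets:
        groups.setdefault(onset % beat, []).append(onset)
    return groups

s = [18, 10, 16, 10, 10, 16, 2, 16, 10, 10, 16, 10]
print(metavoices_by_residue(s))
# {0: [0, 18, 54, 108], 1: [28, 64, 82, 118], 8: [44, 80, 98, 134]}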
However, these metavoices are not themselves similar. This is because the distance
between the first and the second subvoices is 18 in the first metavoice and 36 in the
second and third metavoices: they cannot yield the same global result. We cannot speak
of canons between metavoices, since the compound rhythmical patterns are different.

Looking for real self-similarities


But the subvoices of M1, as we have seen, start on 0, 18, 54 and 108. Therefore, the
distances separating the respective subvoices are 18, 36 and 54. For M2, the subvoices
start on 28, 64, 82 and 118. We find 18 between 64 and 82, 36 between 82 and 118,
and. . . 54 between 118 and 28 on the next period (the period is 144, 144+28 = 172, that
is 118+54). We find the same result for the third voice, if we take 80, 98, 134 and 44
(see figure 1).
The basic time unit is the sixteenth note, and R = (3 6 24 7 8 27 6 7 8 9 31 8). The
pitches displayed here carry no value whatsoever, they simply allow a further distinction
to be made between the various voices. The first four voices (first metavoice) start on
the beat. The next four voices start a sixteenth just after the beat, and the last four just
before the beat. The reordering shows how the voices could be self-similar by folding
some of them to the next period. In other words, by delaying the entrance of voices 7
and 12 until the next period.
Examining the first voices, we notice that the distance between voices is two bars
(=18 units), then four bars, then six. But this is also the case for the next four lines and
the last four lines. For voices 4 to 8, the distance between voices is two bars, then four,
then by folding to the beginning, six. We can generate a completely similar voice if we
delay these last voices by one period.
We can therefore change the S to emphasize the self-similar aspect, pushing the start
of metavoices 2 and 3 to their proper place. Instead of (0 18 28 44 54 64 80 82 98 108
118 134) as starting points, we get (0 18 54 64 80 82 98 108 118 134 144+28 144+44).
Grouping these starting points by metavoice gives M1 = (0 18 54 108), M2 = (64 82
118 144+28) and M3 = (80 98 134 144+44), and a transformed S of (18 36 10 16 2
16 10 10 16 38 16 −44). There is actually another way of constructing metavoices: we can
generate four metavoices by taking voices of the same rank in each group, i.e. (0 64 80),
(18 82 98), (54 118 134) and (108 144+28 144+44).


Figure 1. The first period of a canon of period 144, with the voices reordered in order to
display rhythmical similarities

Conclusion: a slower start and the loss of the maximal category


At this point we have transformed a twelve-voice canon into a three-voice canon, each voice
of which is made of a four-voice canon. This is an impressive self-similar structure, a
canon made of canonic voices.
However, as we know from theory, it is an illusion that there could be three-voice tiling
canons with a period of 144. And here theory is borne out by practice. A careful examination
of the metavoices shows that they actually have, as the theory would predict, limited-
transposition characteristics. For example, the metavoices in Canon 1 repeat the same
rhythmical pattern three times (see figure 2).
However, this can be taken as a virtue, because the period (48, that is, a third of 144)
is still relatively long, and the repetition can be taken as a characteristic of the theme.


Figure 2. The same “Canon 1”, in its third period, as all voices have entered. The “maximal
category” characteristic is completed, and we note that there is a continuum every sixteenth
note with no voices sounding simultaneously. We note (relatively easily in the clarinet part)
that, as the theory predicts, the metavoices are actually repetitive patterns. The rhythm of the
clarinet in the first five bars gets repeated, starting on the second beat (dotted eighth on E) of the
sixth bar, and again on the third beat of the eleventh bar of the excerpt (sung Ab). The change
of beat at the end of the example prepares the transition to “Canon 2”

4 Relationship between different canons


The question of the relationship between canons is even more important, if one wishes to
create Vuza canons with a form that is larger than or different from the simple repetition
of the periodic pattern. We noticed a similarity between the subcanons of different
periods, which led to the idea of canonic modulation.
This is like any rhythmic modulation. As we know, there are two types:
either the unit stays the same, or the whole group stays the same. We explored the
latter case, by constructing canonic modulations that maintain the same duration for
one period, but with canons of different periods.

4.1 Canonic modulations


The idea of canonic modulation arose when we noticed similarities between the sub-
canons. We have seen that we had:


Canon 1: 12 voices, period 144, divided into 3 subcanons starting at 0, 64, 80
Canon 2: 6 voices, period 108, divided into 3 subcanons starting at 0, 48, 60

This creates a simple but remarkable numerical relationship:

a- 144 = 108 × (4/3)
b- 64 = 48 × (4/3)
c- 80 = 60 × (4/3)

to which we can add: d- 0 = 0 × (4/3).
We can choose to have the same span of time for the periods of both canons (that is, the
canon of 144 elements goes 4/3 faster than the canon of 108) and, more remarkably, each
corresponding metavoice will start at the same moment in the canon.
A convenient notation is Canon 1 in 9/16 (the unit being the sixteenth note), and
Canon 2 in 3/4 (the unit being the eighth triplet). 144 sixteenth notes are equivalent to
36 quarter notes, as are 108 eighth triplets.
In this case, not only do the whole canons have the same duration, but the three
metavoices start at the same time in the canon. M1 starts after 64 sixteenth notes =
48 triplets = 16 quarter notes, and M2 starts after 80 sixteenths = 60 triplets = 20
quarter notes. The same logic is used in the last part of the final canon. The period 216
(= 108 × 2) is divided into sextuplets.
This gives us a canonic modulation from canon 1 to canon 2. The entrances of the
voices are tiled.

4.2 Analysis of an example


Figure 3 shows the passage between “Canon 1” and “Canon 2” in the Canon Final.
This page exactly follows the music shown in the preceding figure. This example is worth
careful analysis, as it demonstrates most of the processes examined in this paper.
Let us first examine “Canon 1”. As we said, it is a Canon of period 144 sixteenth
notes, with R= (3 6 24 7 8 27 6 7 8 9 31 8) and the voices starting on (0 18 54 64 80 82
98 108 118 134 144+28 144+44). The clarinet was the first to start, and is the first to
end. The saxophone was the last to start and is the last to end. The cycle lasted for 16
bars in 9/16 time, but the beat has changed to 3/4 (at the same speed for the sixteenth
note). So the cycle now only lasts twelve measures. It is significant that the last note
of the saxophone comes more than 144 sixteenths after the beginning of the last cycle,
since the exposition of the voices lasted more than one cycle.
Because the last voice of each subvoice is relatively isolated, the last notes are heard
with their full values (8 9 31 8). This is clearly seen in the four final notes of the clarinet
(first system) or the saxophone (second system).
Twelve bars is also the length of Canon 2. The rhythmical modulation is calculated
so that the period of Canon 1 is equal to the period of Canon 2. Since Canon 2 has
only 108 units in its period, it is somewhat slower, the unit value being the triplet.
The R value for Canon 2 is (9 5 1 1 5 25 4 1 1 5 2 9 14 5 1 5 11), and we recognize
the values and even the notes of the clarinet “call”.
of the voices are (0 48 60 81 129 141). The violin enters with the first voice, then
the vibraphone, then the contrabass in pizzicato on the last beat of the seventh bar of


Figure 3. The modulation between “Canon 1” and “Canon 2” in Canon Final. This page just
follows the preceding example. We are in 3/4 time, but the clarinet and the saxophone round off
the “Canon 1” in 9/8 (although they are notated in 3/4 for practicality). The violin enters with
the first metavoice of “Canon 2”, then the vibraphone, then the contrabass in pizz. The latter
instrument goes directly from Canon 1 (in sixteenths, arco) to Canon 2 (in triplets, pizzicato).
The first cycle is completed four bars before the end of the example (we see the violin starting
again on high C)

the excerpt. The second voice of the violin happens only 81 triplets later, i.e. at the
beginning of the second system (it is mostly played in harmonics).
The contrabass alternates between arco and pizzicato until its part in Canon 1 is
completed. As with “Canon 1”, one cycle is insufficient for the entry of all the voices, and
therefore we have to wait for the last bar of the example to hear a measure in which all
triplets are played.

5 Texture and constraints


The last passage of the Canon Final is very interesting. It is a 216-period structure and,
with the process of modulation, events happened twice as fast. The duration of a whole
cycle being the same, the unit was now the sixteenth-note sextuplet. The quantity of
notes, and the fact that they (supposedly) did not occur at the same time, helped to create
a very dense texture. In this case, the ambiguity was not between line and contrapuntal
construction, but between counterpoint and texture.

Even with a very precise performance, it was impossible to hear it as a continuum,
since the tempo was not slow and the instruments had very different types of attack
and response time (a vibraphone, a double-bass and a clarinet are clearly very different).
What was perceptible, however, was the relatively equal distribution of events, enforced
by the principle of maximal category canons: one could hear that the vibraphone, the
only instrument playing two voices at a time, carried a denser passage.
However, it was difficult to devise a means of controlling the harmonic structure while
keeping something like a canonic melodic relationship between the voices. That is where
constraint programming was useful.
Charlotte Truchet developed, at Ircam, a constraint programming environment using
OpenMusic. This made it possible to test several melodic solutions subject to constraints:
the first constraint was that the voices should be melodically in canon, within certain
limits. The second constraint was to tend towards a harmonic reference texture.
Because the program operates by random selection, it was even possible to give the
melodic constraint less and less importance, while giving the harmonic one more and
more. Thus the final canon converged towards a unified texture, as the original melody
progressively disappeared.
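The idea can be illustrated schematically. The sketch below is plain Common Lisp and
is not Charlotte Truchet's actual constraint library; it only shows a naive random search
whose cost crossfades a melodic term and a harmonic term through a weight w, with
pitches given as MIDI note numbers:

    ;; Cost of a candidate voice: W = 1 -> purely melodic (canonic) fit,
    ;; W = 0 -> purely harmonic fit against a reference chord.
    (defun blended-cost (pitches melody chord w)
      (+ (* w (loop for p in pitches for m in melody
                    sum (abs (- p m))))
         (* (- 1 w) (loop for p in pitches
                          sum (loop for c in chord
                                    minimize (abs (- p c)))))))

    ;; Keep the best of TRIES random perturbations of MELODY.
    (defun search-voice (melody chord w &key (tries 1000))
      (let ((best melody)
            (best-cost (blended-cost melody melody chord w)))
        (dotimes (i tries best)
          (let* ((candidate (mapcar (lambda (p) (+ p (- (random 5) 2))) melody))
                 (cost (blended-cost candidate melody chord w)))
            (when (< cost best-cost)
              (setf best candidate best-cost cost))))))

Letting w drift from 1 towards 0 over successive runs reproduces, in miniature, the
convergence of the final canon towards the reference texture.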

6 Conclusion
Only OpenMusic allows research of this scope. We encountered two types of problems:
one, the maximal category canons, presents an obvious, long-known musical problem. The
solution, however, depends on the computer tools used for musical representation. The
devices found while using them stem from the very concept of the canons: using voice
mixing, for example, to find sub-voices that have self-similar characteristics. Rhythmic
modulation from one canon to another is another very interesting possibility. The
best solutions can easily be computed automatically with OpenMusic. A more general
consequence is that many canonic structures with varying periods can be grouped.
The second problem is solved in a more empirical way. It turned out that constraint
research could be used as a development process.
We would like to express our thanks to Carlos Agon, Moreno Andreatta and Char-
lotte Truchet, who make all this look simple!

Strasbourg, July 2004

Bibliography
[1] Andreatta, Moreno : Méthodes algébriques dans la musique et la musicologie du
XXe siècle : aspects théoriques, analytiques et compositionnels. Thèse de doctorat,
EHESS, Paris, 2003.
[2] Truchet, Charlotte : Contraintes dans OpenMusic. Conception et développement de
programmation par contrainte adaptées à la composition et l’analyse musicale. Thèse
de doctorat Université Paris 7, Paris 2003.

[3] Bruederlin, Markus : Ornament and Abstraction, Exhibition catalog. Dumont / Fon-
dation Beyeler, 2001.

Georges Bloch

Georges Bloch began studying composition relatively late, at UC San Diego, after
graduating in Engineering at the Ecole Centrale de Lille. He also performs as a singer.
Born in Paris, he now lives in Strasbourg, where he teaches at the University.
Georges Bloch's music is based on three different but nevertheless perfectly compatible
centers of interest:
- Music and space: Palmipèdes d’agrément was the first piece using the Ircam
spatialiser; Palmipèdes salins takes advantage of the particular acoustics of
the Salt factory conceived by Nicolas Ledoux in the 18th century; Fondation
Beyeler: une empreinte sonore offers five simultaneous musical visits to the
Foundation Beyeler in Riehen, Switzerland.
- Interaction, mainly based on the paradox of composed improvisation (Jimmy
Durante Boulevard, Palm Sax); more generally, computer assisted composition,
in real time or otherwise. He is presently associated with the Omax project
(computer assisted improvisation).
- Collaboration with other artists – mostly sculptors or painters (Souvenirs et
moments is based on pictures by Jean-Michel Albérola, inserted into the score).
A piece such as Palmipèdes corbuséens palmés combines the three character-
istics: a mezzo-soprano wanders into the strange acoustic space of a water
tower built by Le Corbusier, and the building itself is made to resonate by an
interactive sound sculpture.

TimeSculpt in OpenMusic
- Karim Haddad -

Abstract. In the last decade, as regards musical composition, my work has been
essentially focused on musical time, mostly using computer-aided composition and most
particularly OpenMusic. This article deals with compositional strategies and musical
duration, and skips considerations regarding pitch in order to better focus on our
subject, except when pitch is related to our main discussion1.

***

1 How ”time passes in OpenMusic ...”


In OpenMusic time is represented (expressed) in many ways. Time could be:

• a number (milliseconds for instance),

• a rhythm tree (an OpenMusic representation of musical rhythm notation [1]),

• a conventional graphical symbol (a quarter note) in OpenMusic Voice editor,

• an ”unconventional” graphical object (an OpenMusic temporal object).

Another important issue is how musical time is conceived internally (i.e. implemented
as a structural entity) [2].
Most computer environments and numerical formats (MIDI, for instance) represent
the musical flow of time broken down into two expressions:

• the date of the event (often called its onset),

• the duration of the event (called duration).

We notice already that this conception diverges from our own traditional musical
notation system, which represents event and duration together in a compact, readable form2. The
only reason for this is that generally the computer representation of time is made not
with symbols but with digits3 . That, I believe, is why today’s composers should make
themselves familiar with a ”listing” representation of musical events.

1 One can argue that these fields are indissociable, most particularly rhythm and duration. We
will consider in this article that duration is of a different order than rhythm (think of Pierre
Boulez's temps strié and temps lisse in Penser la musique aujourd'hui [3], which is a well accepted
view nowadays).
2 In this conception, tempo is meaningless.
3 Time is sequentially expressed and not iconically symbolized (as a whole entity like a measure
containing rhythm figures, having a tempo assignment and a time signature).


Since the MIDI standard is integrated in OpenMusic, this representation type is
common to most musical objects (e.g. the Chord-seq), and since there are OpenMusic
libraries that generate CSOUND instruments that use this ”time syntax”, one needs to
learn this particular representation in order to deal with these objects accurately,
especially when applying them to controlled synthesis.
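As a purely illustrative sketch (not an actual Chord-seq or CSOUND score), such a
”listing” might read as follows, with each event given as (onset duration pitch), onsets
and durations in milliseconds, and pitches in midicents:

    (defparameter *events*
      '((0    500  6000)     ; C4, starts at 0 ms, lasts 500 ms
        (500  250  6200)     ; D4
        (750  250  6400)     ; E4
        (1000 1000 6700)))   ; G4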

1.1 dx->x and x->dx


dx->x and x->dx are easy to use and are very practical when it comes to duration and
rhythm.

• dx->x computes a list of points from a list of intervals and a <start> point,

• x->dx computes a list of intervals from a list of points.

Starting from a list of duration values in milliseconds and a starting point, dx->x will
output a list of time points (dates) according to these parameters. Vice-versa, x->dx
will output a sequence of duration values (intervals) starting from a list of time-dates.
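In plain Lisp, what the two boxes compute can be sketched as follows (a minimal
reimplementation for illustration, not OpenMusic's own code):

    (defun dx->x (start dxs)
      "Accumulate intervals DXS into time points, beginning at START."
      (let ((points (list start)))
        (dolist (dx dxs (nreverse points))
          (push (+ (first points) dx) points))))

    (defun x->dx (xs)
      "Intervals between successive time points XS."
      (mapcar #'- (rest xs) xs))

    ;; (dx->x 0 '(250 250 500))   => (0 250 500 1000)
    ;; (x->dx '(0 250 500 1000))  => (250 250 500)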
To illustrate this mechanism, we start with a marker file created with SND. This file
represents a list of time events automatically or manually generated. In figure 1, the
soundfile was marked manually.

Figure 1. Markers in SND

Once the analysis file is imported into OpenMusic, we quantify it using omquantify,
as shown in figure 2.
Careful examination of the rhythmical result may reveal a wide range of duration
values (in the present case they could be considered as a series of durations). We might
also treat it as a sequence of accelerando/decelerando profiles (modal time durations). In
either case, it is potentially rich rhythmic material. It can be used to notate a sound
file symbolically and/or to integrate it into a score4.

Figure 2. Quantification of markers
We can of course extend this ”symbolic” information and consider it as compositional
material, for example by applying to it contrapuntal transformations such as recursion,
diminution or extension, obtained simply by multiplication, permutation, etc.

1.2 Combinatorial processes of rhythmical structures


Another aspect of rhythm manipulation that I use is exactly opposite to the preceding
example. Instead of extracting rhythm from a physical source (such as a soundfile),
I directly apply combinatorial processes to rhythmic structures, using the internal defi-
nition of rhythm in OpenMusic called Rhythm Trees [1]. This is a wonderful technique
for creating any imaginable rhythm, simple or complex, and since the RT standard is
both syntactically and semantically coherent with musical structure, it makes rhythm
manipulation and transformation efficient.
It is for this reason that I came to write the Omtree library for OpenMusic, basically
a personal collection of functions. These allow 1) basic rhythm manipulations, 2)
practical modifications, and 3) some special transformations such as proportional
rotations, filtering, substitution, etc.

4 It is also very practical for coordination between musicians and tape.


The whole structure of ”...und wozu Dichter in dürftiger Zeit, ...” for twelve instru-
ments and electronics is written starting from the generic measure in figure 3.

Figure 3. Generic measure of ”...und wozu Dichter in dürftiger Zeit, ...”

which corresponds to the following rhythm tree:


(? (((60 4) ((21 (8 5 -3 2 1)) (5 ( -3 2 1)) (34 ( -8 5 3 -2 1))))))
Rotations are calculated on the main proportions based on the Fibonacci series (ro-
tations of D elements – durations, and rotations on the S elements also – subdivisions).
The first rotation is shown in figure 4.

Figure 4. First rotation

The corresponding rhythm tree is as follows:


(? (((60 4) ((5 ( 2 1 -3)) (34 ( 5 3 -2 1 -8)) (21 (5 -3 2 1 8 ))))))
This is generated by the patch in figure 5.
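The list operation underlying these rotations can be sketched in a few lines of plain
Lisp (the Omtree functions work on complete rhythm trees; this only illustrates the
rotation itself):

    (defun rotate-left (list n)
      "Rotate LIST to the left by N positions."
      (let ((n (mod n (length list))))
        (append (nthcdr n list) (subseq list 0 n))))

    ;; D elements:        (rotate-left '(21 5 34) 1)     => (5 34 21)
    ;; S elements of 21:  (rotate-left '(8 5 -3 2 1) 1)  => (5 -3 2 1 8)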
The result is a six-voice polyphony. The pitches are also organized in ordered rotation
and heterophonically distributed among the six voices (figure 6).
The excerpt in figure 7 shows the same result but after quantification.

2 Topology of sound in the symbolic domain


2.1 Form
We can consider sound analysis as an open field for investigation in the time domain,
where interesting and unprecedented time forms and gestures can be found and used as
basic (raw) material for a musical composition. This new approach is made possible
thanks to fast computers. But sound analysis being a vast field of research (in which
one can find multiple kinds of analysis and visual or raw data formats), one must take
care not to forget the nature of this analysis and its original purpose. Sound analysis
could therefore be a rich source from which symbolic data can be retrieved and freely
remodelled according to the composer’s needs.
Figure 5. Rotation patch

Figure 6. Six-voice polyphony

Sound analysis data can also be considered as potentially correlated vectors. These,
depending on the analysis type, can be streams of independent linear data or, more
interestingly still, data arrays. The types of data found most often are directly related to
the nature of sound. On an abstract level, this can be regarded as a pseudo-random flow
or considered as coherent, interrelated orders of data, again depending on the analysis
type chosen.


Figure 7. ”...und wozu Dichter in dürftiger Zeit?...” for twelve instruments and electronics

When using sound analysis as a basis for music material production and most par-
ticularly in the time domain, it is important to note that the following approach is not
a ”spectral” one in the traditional sense5 , but on the contrary should be considered as
a spectral-time approach. The frequency domain will be translated into the time domain
and vice-versa according to the compositional context, as will be shown later in the
present article6.
This ”translation” is made possible by the wide variety of analysis types (additive,
modRes resonance modes, etc.), not forgetting the many data formats available, whether
in visual or numerical form.
The way the material is used depends on the musical project. Different orders of
”translation” in the symbolic field can be applied. Form can be literally extracted from
the analysis data or taken from symbolic material. The mixed sources (symbolic and
analytical) are then fused together in the compositional process, and that is where the
tools are very important. OpenMusic is like a black box, in which the analytical and the
symbolic come together in a kind of fusion in the field of musical time.

2.2 No one to speak their names (now that they are gone)
The structure of No One To Speak Their Names (Now That They Are Gone) for two bass
clarinets, string trio and electronics, is based on an aiff stereo sound file of 2.3 seconds
duration.

5 Meaning that form and strategies are primarily based on pitch.

6 This was the initial approach in Stockhausen's well-known article “...wie die Zeit vergeht...” [7].


Figure 8. Segmentations

Considering the complex nature of this sound file (friction mode on a Tam-Tam), it has
been segmented into 7 parts (figure 8). The segmentation is based on the dynamic profile
of the sound file.

Figure 9. Array of n dimensions

We may consider a sound as an array of n dimensions (as shown in figure 9) with
potential information that can be translated into time information. It seems natural to
construct this array using the additive analysis model (time, frequency, amplitude and
phase). This is a rather straightforward description that could be used to process the
sound directly in the time domain or for eventual resynthesis. Other sound analysis-
descriptions are available, such as spectral analysis, LPC analysis, the modRes analysis
and so on. Here, the modRes analysis was chosen. All the examples described above
are discrete windowing analyses, from which the time domain is absent. Most of them
have time addressing, but the last one (the modRes analysis) is an array of dimension 3
(frequency, amplitude and bandwidth/Pi), computed by the patch in figure 10.


Figure 10. Frequency, amplitude and bandwidth data

A sound analysis/description and an array of n dimensions can be translated into
the time domain from array to array, i.e. the analysis data could be read in any plane,
vertically, diagonally, etc., or in any combination of arrays. This ”translation” is of course
arbitrary and is meant to be a translation in the symbolic domain, the score being another
kind of array. Although the operation may seem arbitrary (which indeed it is), in my
opinion there are two pertinent points to be considered.
Firstly (as we will see later), the sound array is processed in a completely interde-
pendent way, taking into account all the proportionate weight relations contained within
it. The coherency of the sound resonance will be, so to speak, ”reflected” in the symbolic
domain through specific analogical modes (dynamics, durations and pitch), which are not
supposed to be literally associated one by one (i.e. exact correspondence of parametrical
fields is not necessary). In this piece they are permuted.
The second important point is that this translational strategy establishes a strong
relationship between the electronic and the acoustical components of the piece, creating
a strong formal fusion.
Moreover, if we visualize the given data in a 3 dimensional graph (see figure 9) we
will see many folds (”plis” [5]) of different densities. These are directly related to the
polyphonic gesture representing the point-counterpoint technique used in the score.
As we can see in the figure 11, there are two parallel musical processes: the electronic
part (tape), which is also entirely constructed with the initial material (the Tam-Tam


Figure 11. Two parallel musical processes

sound file), and the score part. The semantic bridge is shown as a dashed arrow. It
is through analysis that both domains communicate with each other7 . In the case of
resynthesis, another bridge could be established in the other direction (from symbolic to
sound domain) but this is not the case in our present composition.
In No One To Speak Their Names (Now That They Are Gone), using the modRes
analysis, the bandwidth array has been chosen to order each pitch set in each fragment
according to bandwidth. For each parsed pitch segment we will again establish a propor-
tional relation: all pitches/highest pitch. These proportions will be used as snapshots
of seven sonic states in a plane of a three-dimensional array (x, y, z), each state being
the sum of all energy weights within one window. We will use them to determine our
durations throughout the composition (a sketch of this translation follows the list
below). The durations are of two types:
• Macro durations that represent metric time and determine a subliminal pulse illus-
trated by dynamics. Measures are calculated following the proportions computed
from the last segment.
• Local durations consisting of effective duration values from the four instruments.
These are distributed according to proportions on either side of the measure bars,
creating asymmetrical crescendo-decrescendo pairs.
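As announced above, the proportional translation can be sketched as follows (plain
Lisp with invented values for illustration; the actual proportions come from the
analysis data):

    (defun proportions (pitches)
      "Normalize PITCHES (e.g. in midicents) by the highest one."
      (let ((top (reduce #'max pitches)))
        (mapcar (lambda (p) (/ p top)) pitches)))

    (defun scale-durations (pitches total-ms)
      "Scale a total duration by each proportion to obtain duration values."
      (mapcar (lambda (r) (round (* r total-ms))) (proportions pitches)))

    ;; (scale-durations '(6000 7200 8400) 4000) => (2857 3429 4000)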
The main concept of the whole work dealing with pitch/bandwidth/dynamic weights is
that of symmetry. As we have seen in the example above, we can use it as a compositional
element.
Starting from one mode of resonance, which was assigned to durations following our
proportional translation (see figure 12), we will apply to it a new axis of symmetry,
from which all durations will start and then continue in asymmetric mirroring, as shown
in figure 13.
This was calculated by the reson-chord box patch (see figure 14) and then quantified
(figure 15).

7 Analysis could be thought of as another potential aspect of a sound file or, in other terms, as an
alternative reading/representation of sound.


Figure 12. Resonance mode durations

Figure 13. -35 degree symmetrical axis

Figure 14. -35 degree symmetrical axis patch


Figure 15. Quantification

Durations are not the only elements calculated from the analysis. Starting from measure
59 (figure 16), pitches are extracted from the analysis and distributed over all four
instruments, following an order based on bandwidth over amplitude generating weight
criteria, in descending order of values.

Figure 16. Excerpt from No One To Speak Their Names (Now That They Are Gone)

3 Hypercomplex minimalism
3.1 Sound analysis for controlling instrumental gestures
In contrast to the examples we have already seen, where data took the form of 3D
information arrays, and which were therefore complex, here we see concrete use of a
simpler 2D sound data array.
Ptps analysis is a pitch estimation analysis (pitch and time arrays). When applied to
a noise source or an inharmonic sound, the analysis output tends to yield interesting
profiles (figure 17).

Figure 17. PTPS analysis

This data will be used after being broken down into n fragments as a means of
controlling musical gesture (figure 18).

Figure 18. Fragmented analysis in OpenMusic

The fragments will be considered as potential data for the dynamic control of instru-
mental gestures (figure 19).
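The fragmentation step itself is simple. Here is a minimal sketch, assuming the analysis
curve is a list of (time . value) pairs (the actual patch works on the imported analysis
data):

    (defun split-curve (curve n)
      "Split CURVE into N roughly equal, consecutive fragments."
      (let ((size (ceiling (length curve) n)))
        (loop for start from 0 below (length curve) by size
              collect (subseq curve start
                              (min (length curve) (+ start size))))))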


Figure 19. Bow pressure, placement and speed control of the doublebass part in ”In lieblicher
Blaue...” for amplified bass saxophone, amplified doublebass and electronics.

These ”potential” fields will be subsequently filtered and assigned according to musical
context. As we mentioned above (section 2), the relevance of this technique arises from
the fact that all sound sources used for analysis or in the tape part of the piece are taken
from samples pre-recorded by the musicians, using special playing modes (multiphonics,
breathing, etc.) (see figure 20).

Figure 20. Excerpt from ”In lieblicher Blaue...” for amplified bass saxophone, amplified dou-
blebass and electronics

One must, however, also take into consideration the fact that musical events are a
balance between complex gestures in the process of sound production and minimal ac-
tivity in note and rhythm production, i.e. we might distinguish two layers of activity:
”traditional” score notation, and control notation.


3.2 Adagio for String quartet


Again in this work, a soundfile served as starting point for the whole piece. However, the
technique is completely different. Instead of using an external analysis program, all the
work was carried out in OpenMusic.
OpenMusic's handling of soundfiles is limited to playing them and representing them
in the SoundFile object as a time/amplitude curve, typical of all sound editors. My
intention was to use this limited and reduced data in order to create a closer affinity
with the symbolic mode, keeping in mind the instrumental nature of the string quartet.
I therefore used the BPC object and downsampled the amplitude curve in order to
have a globally satisfying overview of the amplitude profile (figure 21).
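The downsampling itself can be sketched as keeping every n-th point of the envelope
(a plain-Lisp illustration of the idea, not the patch actually used):

    (defun downsample (samples step)
      "Keep every STEP-th value of SAMPLES."
      (loop for rest on samples by (lambda (l) (nthcdr step l))
            collect (first rest)))

    ;; (downsample '(0.0 0.4 0.9 0.5 -0.3 -0.8 -0.2 0.1) 3) => (0.0 0.5 -0.2)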

Figure 21. Generating durations from an amplitude envelope


The amplitude curve having two phases, the downsampling accentuated the difference
between the positive and the negative phase, producing a sort of double-choir polyphony
(figure 22).

Figure 22. Amplitude curve transformed into double choir polyphony

I determined four axes intersecting the curves. Duration values were then quantified
starting from these segments (see figure 23).

Figure 23. Quantified durations

In order to verify the result, the two polyphonies were synthesised using Csound and
the result was put in OpenMusic’s maquette object (figure 24).


Figure 24. Result displayed in a maquette

Figure 25 shows the beginning of the Adagio for String quartet, which was written
using the strategy described above.

Figure 25. Beginning of the Adagio for String quartet


4 Conclusion
In the compositional processes presented here, we can distinguish between two funda-
mental procedures: data and algorithms.
Data in itself can be regarded as a conceptual idea. It represents the deterministic
drive of ”what must be” in a given lapse of time decided by the composer's Deus ex
machina.
The algorithms may be seen as the executive part of the composer's idea; they too are
deterministic once the proposed data is added, yet they carry a dynamic decisional
potential that models the propositional data to its own creative role.
These two procedures or techniques are elements of a broad dynamic program, the
computational part (analysis and processing) having been carried out with different com-
puter programs such as OpenMusic, Diphone, etc, and can be considered part of a unique
program: the composition itself. It is legitimate nowadays to consider a work of art from
the performance and aesthetic viewpoints, but also from a deconstructural angle. I per-
sonally adhere to Hegel’s [6] thesis8 , siding with his view that art has fulfilled its purpose,
and that modern art cannot be understood in the same way as the art that preceded
it (from Descartes to Kant). Neither the post-modernist attitude nor techno-classicism
will allow the destiny of modern art to be accomplished. A meticulous study of the state
of art and of its own medium is necessary, something like that of the Renaissance. The
French composer and philosopher Hugues Dufourt states: ”La musique en changeant
d'échelle, a changé de langage.9” [4]. Techniques in composition and sound exploration
must be totally integrated, not only in the praxis of composition but in its understanding
and, better still, as an integral part of composition itself.

8 ”In allen diesen Beziehungen ist und bleibt die Kunst nach der Seite ihrer höchsten Bestimmung für
uns ein Vergangenes.” (X, 1, p.16) ”In all these relations, art, in respect of its highest destination, is
and remains for us something past.” (X, 1, p.16).
9 In changing its scale, Music has also changed its language.

Bibliography

[1] Agon C., Haddad K., Assayag G.: Representation and Rendering of Rhythmic Struc-
tures. WedelMusic, Darmstadt, IEEE Computer Press, 2002.

[2] Agon C.: OpenMusic : Un langage visuel pour la composition musicale assistée par
ordinateur. Thèse de doctorat, Université Paris 6, 1998.

[3] Boulez P.: Penser la musique aujourd'hui. Paris, Gallimard, 1987.

[4] Dufourt H.: L'oeuvre et l'histoire. Christian Bourgeois Éditeur, 1991.

[5] Deleuze G.: Le Pli - Leibniz et le baroque. Les éditions de Minuit, 1988.

[6] Hegel G.W.F.: Phänomenologie des Geistes. Hg. von O. Weiss. Leipzig, 1907.

[7] Stockhausen K.: ”...wie die Zeit vergeht...”, Die Reihe, n° 3, 1957.

Karim Haddad
Born in 1962 (Beirut, Lebanon). First musical studies at the National Conservatory
of Beirut. Studies in Philosophy and Literature at the American University of Beirut.
In 1982, settles in Paris (France). B.A. in Musicology at the Sorbonne University.
Follows studies in Harmony, Counterpoint, Fugue, Orchestration, Analysis and
Composition at the Conservatoire National Supérieur de Musique de Paris with Edith
Lejet (Harmony, Fugue), Bernard de Crepy (Counterpoint), Paul Mefano
(Orchestration), Jacques Casterede and Alain Louvier (Analysis), and Alain Bancquart
(Composition), where he obtains six prizes and the Diplôme Supérieur in Composition.
Workshops in composition with Klaus Huber and Emanuel Nunes. Between 1992 and
1994 he participates in the Ferienkurse für Musik in Darmstadt, where he works with
Brian Ferneyhough and obtains the Stipendienpreis 94 in composition. In 1995, he
follows the computer music courses at IRCAM and becomes a member of the IRCAM
Forum, to which he contributes in 1999 by writing the Om2Csound library for
controlling synthesis through the OpenMusic environment. He then writes OpenMusic's
reference manual and tutorial. He currently works at IRCAM as a technical advisor
for the Ircam Forum. His works are performed by various ensembles and artists such
as the Berlin Staatsoper, l'Itinéraire, 2e2m, the Orchestre Philharmonique de
Radio-France, the Diotima quartet, etc.

The Genesis of Mauro Lanza’s
Aschenblume and the Role of
Computer Aided Composition
Software in the Formalisation of
Musical Process
- Juan Camilo Hernández Sánchez -

Abstract. The present article will attempt to illustrate the compositional process of
Mauro Lanza's work Aschenblume for nine-instrument ensemble (Fl. Cl. Perc. Pno.
Vl. Vla. Vc. Cb.). The piece was a commission from the French Culture Ministry and the
ensemble Court-Circuit. In the first part, an introduction to the musical parameters that
unify the piece will be presented, as well as a description of the role of OpenMusic in the
pre-compositional processes; in the second part the musical material will be analysed with
a discussion of their construction in OpenMusic, and finally the fundamental structure
of the piece will be described to show how the sections are assembled. The writing of this
article was made possible thanks to a close collaboration with the composer.

***

1 Introduction
Mauro Lanza's work is characterized by the mental conception of musical ideas, followed
by computer-aided realisation. The pre-compositional formalisation is a compulsory phase permitting
the composer to discern the material that could be quickly produced by the machine. The
term musical material will be used to name each minor section possessing autonomous
musical characteristics. The composer’s intervention in the CAC process takes place
with the programming of computerised tools that respond to the needs of his musical
language.
Aschenblume is a German word meaning ashes settling down taking the shape of a
flower. The word is taken from a poem by Paul Celan. The literal translation could
be ”Ash flower” and its literary context strengthens the semantic connotation of each
component, (the flowering through the vanishing of ashes). The process applied to the
initial material of the piece could be seen as a musical analogy of the word: the piece
begins in a rhythmic ostinato that undergoes harmonic and rhythmic disintegration and
gradually becomes a sustained chord. Then the material is reiterated several times, but
bearing a new musical element at each repetition. The evolution of these new elements
leads them to be progressively dissimilar from the initial material; the treatment given
to their common musical principles builds up the coherence between them.


The musical path allows all kinds of interaction between the different bodies of musical
material; the handling of their interpolation and their contrast becomes the principal axis
of tension. The development of all the bearing materials is a disintegration process. An
important feature of this process is the gradual reduction of duration of each section until
the end of the piece where they are perceived as small fragments, keeping the essential
musical elements that characterised them. A brutal and regular pulsation finishes the
piece in the form of a collision between the contracted elements.

2 Pre-compositional process
The formalisation of the compositional process implies a resolution of the principles that
link the musical materials used in the piece. The main aspects are as follows:

• the development of a rhythmic hierarchical language by the polyphonic assembling


of rhythmic patterns,
• the creation of the harmonic field homogenising the sonority of the piece,
• the descending melodic shape. Each main body of material is governed by the idea
of a melodic descent,
• each section of the piece is reduced in duration.

The harmonic and rhythmic aspects are created almost entirely using OpenMusic; the
comprehension of this process is essential to the understanding of the musical material’s
construction and its development throughout the piece. Therefore, an explanation of the
rhythmic and harmonic formalisation by CAC will be presented in the introduction to
the analysis of the piece.

2.1 The rhythmic hierarchical language by polyphonic assembling of rhythmic patterns
Rhythmic hierarchy and periodic patterns are the main rhythmic aspects developed by
Mauro Lanza. The hierarchy can be achieved when some specific points are emphasized
in a rhythmic sequence, creating varying levels of importance. The accented points
constitute an original rhythmic pattern that arithmetically generates all of the minor
rhythmic structures. In Aschenblume, as in other Lanza works, there is a polyphonic
treatment of the rhythmic hierarchies: the original rhythmic pattern is highlighted at the
points where all voices are assembled; each voice has its own duration and is repeated
throughout the whole length of the original pattern. The rhythmic sub-patterns of each
voice are obtained from a modulo division of the original pattern, which generates sub-
patterns whose onsets coincide with those of the original pattern.
The onsets of the original pattern are expressed as ratios, which are note values
measured in relation to the beginning of the pattern, i.e. the point zero. The divisor
indicates the division unity and the numerator the position of each onset. For example,
a 21-sixteenth-note pattern with onsets at the 5th, 13th, 15th and 21st sixteenth notes
will be represented as follows:
0 5/16 13/16 15/16 21/16


The possible duration of the derived pattern is also expressed as a ratio and is
said to be a modulo. The numerators of the original pattern onsets are divided by the
numerator of the modulo; the remainders of each division are the onset values that
determine the derived pattern. If a remainder is repeated, its value is taken just once.
With a sub-pattern lasting 8 sixteenth notes, the operation is as follows:

Original pattern onsets    0   5   13   15   21
Sub-pattern duration       8 (i.e. modulo 8/16)
Remainders                 0   5    5    7    5
Sub-pattern onsets         0   5    7

Figure 1. Rhythmic pattern with two sub-pattern periodicities

The composer creates an initial CAC tool in order to accelerate this process. The tool
is an OM function called Subpatterns that generates the sub patterns with any modulo
and any division unity.
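As a minimal reconstruction of this arithmetic (for illustration only; the composer's
actual Subpatterns function also handles the division unity):

    (defun sub-pattern-onsets (onsets modulo)
      "ONSETS and MODULO are numerators over a common division unit."
      (sort (remove-duplicates
             (mapcar (lambda (o) (mod o modulo)) onsets))
            #'<))

    ;; (sub-pattern-onsets '(0 5 13 15 21) 8) => (0 5 7)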
The rhythmic density depends on the number of notes in the modulo of a sub-pattern,
and can also be controlled with another function created by the composer. The function
creates some restrictive constraints so as to choose the least dense patterns. In order
to solve the constraints, the search-engine Pmc-engine (from Mikael Laurson's PWCon-
straints library) is used. This search-engine yields solutions found in a search domain,
giving preference to the solutions that respond to the constraints. The constraints are
rules and heuristic rules: the rules accept or reject sub-pattern solutions according to a
simple true or false question, while the heuristic rules select the sub-patterns according
to the desired density value. The composer can thus find patterns determining the
minimum value allowed in a voice and control the number of notes of each sub-pattern
depending on its density.
In some sections of the piece the pitch is also formalised in order to create melodic
patterns corresponding with the periodicity of the sub-patterns, while the pitch reinforces
the original pattern, creating a heterophony. Each note of the chosen melody is allocated
to an onset of the original pattern; the same notes are then allocated to the remainders
of the division. When equal values occur as remainders of different divisions and the
allocated note is different, this value appears in the sub-pattern taking only one note
from the allocated notes at each repetition.


Original pattern onsets    0   5   13   15   21
Sub-pattern duration       8 (i.e. modulo 8/16)
Remainders                 0   5    5    7    5
Notes                      C   D    E    F    G

Sub-pattern modulo         8                      7
Onsets                     0   5        7         0     1   5   6
Notes                      C   D,E,G    F         C,G   F   D   E

Figure 2. Heterophony over the rhythmic pattern shown in figure 1
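The allocation can be sketched as grouping the melody notes by onset remainder;
remainders that collide collect several notes, which are then cycled through at each
repetition (an illustration, not Lanza's actual patch):

    (defun allocate-notes (onsets notes modulo)
      "Group NOTES by the remainders of ONSETS modulo MODULO."
      (let ((table (make-hash-table)))
        (loop for o in onsets
              for note in notes
              do (push note (gethash (mod o modulo) table)))
        (sort (loop for k being the hash-keys of table
                    collect (cons k (reverse (gethash k table))))
              #'< :key #'car)))

    ;; (allocate-notes '(0 5 13 15 21) '(c d e f g) 8) => ((0 c) (5 d e g) (7 f))
    ;; (allocate-notes '(0 5 13 15 21) '(c d e f g) 7) => ((0 c g) (1 f) (5 d) (6 e))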

3 Creation of a harmonic field


Aschenblume's harmony is obtained entirely from two ”bell-like” spectra created
by physical modelling synthesis1. The two instruments employed are a free and a clamped
circular plate; their spectra are inharmonic, with a huge quantity of non-tempered par-
tials. In order to be used, the partials are approximated into quartertones and the clarinet
is tuned a quartertone lower. Tempered instruments such as the piano and some pitched
percussion instruments mostly play tempered notes; in the sections where more notes are
needed the harmonies are approximated into semitones.
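The quartertone approximation amounts to rounding each partial to the nearest 50
midicents (a minimal sketch, taking A4 = 440 Hz = 6900 midicents):

    (defun freq->quartertone (hz)
      "Approximate HZ to the nearest quartertone, in midicents."
      (* 50 (round (+ 6900 (* 1200 (log (/ hz 440.0) 2))) 50)))

    ;; (freq->quartertone 440.0)  => 6900   ; A4
    ;; (freq->quartertone 466.16) => 7000   ; B-flat 4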

Figure 3. Free circular plate spectra

1 This kind of synthesis permits the creation of ”virtual” instruments starting from their dimensions

and the physical properties of their material. The procedure is made possible by the Modalys software,
realized at the IRCAM. The composer Mauro Lanza created an interface to control the synthesis from
inside OpenMusic.


Figure 4. Clamped circular plate spectra

The harmonic field obtained is not intended as a musical representation of an acoustic


model in the spectral music manner, but to give a homogeneous sonority to the whole
piece. Therefore, the spectra become the ensemble of notes used in the piece, and the
coherence of its form is due to the organisation of sub-ensembles of partials. Some partials
have higher amplitudes depending on which part of the instrument’s register is sampled.
In order to create the partial sub-ensembles, a constraint tool is used to search the
instruments’ points at which there are fewer simultaneously sounding partials. Each one
of the points becomes the harmony of a section, and they are played either as chords
with their corresponding amplitudes or melodically as a scale for each instrument.
The points with common partials are used as the harmony of the related sections of
the piece, which gives them harmonic homogeneity.

Figure 5. Enumeration of chords obtained from the partials of different points of the instruments


4 Aschenblume musical materials


As we explained in the introduction, new material appears constantly throughout the
piece. A description of each section is necessary if the reader is to understand their
formal structure.

4.1 Material A
The piano presents a homo-rhythm in sixteenth notes accentuating points that shape a
melodic descent. The homo-rhythm accelerates while the accents decelerate, because the
periods between them are enlarged. This process creates a temporal paradox that will
be resolved as a sustained chord. All the other instruments underline the piano accents
in triplet subdivision, generating a small gap between each. However, the percussion
follows the same pattern as the piano accents. The melodic descent movement is applied
to the other instruments in different periodicities from that of the piano.
To create this material the composer developed a CAC tool, a patch that applies
the following process: the harmonic field is approximated into semitones and filtered by
an intervallic structure transposing in chromatic downward steps. Each transposition is
arpeggiated downwards, avoiding the notes from the harmonic field, thus creating an
irregular descent. Two-note chords accentuate points of the melodic descent.
The filtering process is carried out in the OM patch, as is the acceleration of the
homo-rhythm. The placing of accents is done manually by the composer. The following
step is the extraction of the onsets of the accentuated notes, in order to generate the
rhythm for the other instruments. A patch takes these onsets and approximates them
into a triplet subdivision, the resulting rhythm being allocated to the flute, clarinet,
violin and viola. The percussion accentuates in the same subdivision as the piano,
approximately following its melodic descent shape on bongos and congas.
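This onset approximation can be sketched as snapping values from the sixteenth-note
grid to the nearest triplet-eighth grid (plain Lisp, for illustration only):

    (defun snap-to-triplets (onsets-in-16ths)
      "Onsets given in sixteenths, returned in quarter notes on a 1/3 grid."
      (mapcar (lambda (o) (* 1/3 (round (* o 1/4) 1/3)))
              onsets-in-16ths))

    ;; (snap-to-triplets '(0 5 13)) => (0 4/3 10/3)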

Figure 6. Material A (Meas. 1)


The melodic descent for the other instruments is chosen by the composer from the
harmonic field assigning different registers to each instrument and allowing them to have
common notes. The number of notes of this descent is progressively reduced until each
instrument only has one sustained note, and this procedure leads to the incoming chord.
The melodic descent is orchestrated in such a way as to have three periodicity levels. The
violin and flute always have a similarly patterned descent and note reduction. The
viola and clarinet play the descent in a periodicity that is different from the violin and
flute. Finally, the heterophony principle is used to allocate some pianissimo descending
notes in different periodicities for the flute and clarinet.

Figure 7. Melodic descent periodicities for Flute, Clarinet, Violin and Viola (meas. 1)

Throughout the piece the piano harmony evolves, changing the intervallic structure
that is transposed, while the other instruments remain harmonically similar to their first
appearance. The sustained arriving chord undergoes a harmonically enriching metamor-
phosis; its dynamics and orchestration also evolve, using the amplitudes of the chords
extracted from the spectra as a model.

4.2 Material B
It is a rhythmic pattern built up with the original pattern and four sub-patterns, each
one in a different subdivision unity. The double bass and the vibraphone play the original
pattern in a regular pulse of 9 sixteenth notes. The piano has a second sub-pattern that
uses the same periodicity with an internal division; it doubles the vibraphone at each
assembling with the original pattern. The viola and the violin use a triplet sub-division
having a periodicity of 10 triplet eighth notes. The clarinet plays a sub-pattern lasting 3
quarter notes and its sub-division unity is the triplet. The flute and the violoncello have
a quintuplet sub-division and their periodicity is equal to one half note.
The harmony of this section is an orchestration of the 8th chord from the nodes
obtained in the spectra. The original rhythmic pattern is characterised by having the
fundamental (C♭), the 3rd and the 7th partials (Pno., Vbr.). The rest of the partials
are distributed among the other instruments, the voices, rhythmically assembled, use the
same notes. Therefore we may conclude that the original pattern is harmonically stable
while the sub-patterns are changing, a process that generates an internal evolution of the
material very important to its interpolation with the other bodies of material.


Figure 8. Material B (meas. 51)

4.3 Material C
It often appears as a transition element, characterised by the rhythmic assembling of
all the voices, and the polarisation over the medium and high registers. It has a sextu-
plet melodic ascending figure built up in a scale whose intervallic structure is identical
throughout all octaves. As a result, it is the only section constructed using a harmony
from outside the spectra.
All the voices begin in a different sixteenth note of the sextuplet, in order to create
polyphony, in which each sixteenth note should have the maximum possible number of
simultaneous notes from the scale.
The appearance of this material articulates the formal structure of the piece because
of how it differs from the other bodies of material, especially with regard to its homo-
rhythmic polyphony and its ascending character.

4.4 Material D
Perceptually, this section is a reminder of Material A: the piano plays a similar homo-
rhythm accentuating the descending notes. The main differences lie in the polyphonic
and rhythmic treatment given to the other voices: polyphonically the differences lie in
the instruments playing in canon with the piano; rhythmically they arise from a common
sixteenth note unit division throughout all the voices.


Figure 9. Material C (meas. 125)

The computer tools initially apply the filter to the harmonic field, polarising around
the two highest notes of Chord 2. The composer manually chooses the accents, which in
this case accelerate until they become constant eighth notes, whereas the homo-rhythm
remains in sixteenth notes throughout the section.
The instruments enter in an asymmetrical four-voice canon: one voice is played
simultaneously by the violin and the viola, the flute and the glockenspiel follow twelve
quarter notes later, and the violoncello joins twenty-three quarter notes after the piano
entry. The rhythm is obtained from the piano accents, which are taken as onsets and
augmented in different proportions for each instrument. The proportions used are
irregular and are approximated into sixteenth notes, which are applied as the subdivision
unit. The melodic descent is also used in the instruments, where it is perceived as an
irregular echo of the piano accents. In order to polarise the harmony towards the
complete Chord 2, it is formed by the gradual reduction of the descending melody until
each voice has just two notes.

Figure 10. Rhythmic structure of Material D canon


The final part is the most developed element of this section; in the rest of the piece
it evolves becoming a constant sixteenth note pattern where the two highest notes are
alternated. Each note is harmonised with the chord notes played by each instrument with
its respective register in such a way that the highest notes of each voice are simultaneous.
The same is applied to the lowest notes. This fast alternation of high and low register
mostly emerges at the end of the piece, sometimes transposed or harmonically enriched.
In the formal scheme it is named D’.

Figure 11. Polarisation over Chord 2 in Material D (Meas. 91)

4.5 Material E
Constructed with the rhythmic pattern tool, this section usually appears as a transition
between two sections. Using Chord 27 as its harmony, its principal characteristic is
the flute and the double bass sustaining the outer notes while the rest of the ensemble
plays the rhythmic pattern and sub-patterns.
The original pattern appears in the cowbells over the fundamental of the chord; it has
a regular periodicity of 7 sixteenth notes. The sub patterns are individual for each instru-
ment, which means that they are not doubled. The violin and the viola are sub-divided
in half note quintuplets, with a 9 and 8 quintuplet eighth note periodicity respectively.
The violoncello and the clarinet are subdivided in triplets and their periodicities are 11
and 12 triplet eighth notes. The piano does not have a regular periodicity; its role is to


provide a link to the previous section.

Figure 12. Material E (Meas. 137)

4.6 Material F
A contrasting element appears with this material, which consists of a rhythmic unifica-
tion of the entire ensemble undergoing a deceleration. This is a very short section that
introduces a new element, by way of dynamical and orchestration contrast. Therefore, it
has great importance in the formal articulation.
The tool develops this material by expanding a spectrum that is contracted in the
lower part of all registers; the computer contracts the given chord in the lower register,
subsequently, the chord is gradually transposed towards its original register. The rhyth-
mic deceleration is a simple interpolation between two rhythmic values, always begining
with sixteenth notes, and the final value depending on the following section. The result-
ing rhythm has a regular pulsation gradually transformed into syncopations, creating a
special tension and introducing a new element.
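The deceleration can be sketched as a linear interpolation between an initial and a
final duration over n events (plain Lisp, with illustrative values; n must be at least 2):

    (defun decelerate (from to n)
      "Interpolate linearly from duration FROM to duration TO over N events."
      (loop for i below n
            collect (+ from (* (- to from) (/ i (1- n))))))

    ;; (decelerate 1/4 1 5) => (1/4 7/16 5/8 13/16 1)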

4.7 Material G
It is similar to Material C in many aspects: it has a homo-rhythmic quintuplet subdivision
unit for all the playing instruments; each instrument begins at a different note of the
quintuplet, creating the same polyphonic effect that appeared in Material C.


Figure 13. Material F (Meas. 163 )

The main difference is the harmony. In this case the ascending movement occurs over
a B♭ arpeggio; the clarinet part distorts the harmony with microtonal neighbour notes
that are gradually eliminated to reach a sustained B♭ unison in all instruments. It is also
a reminder of the disintegration that occurred in Material A.

Figure 14. Material G (Measure 167)


The treatment of timbre is unique to this material because of the absence of piano and
pitched percussion, as well as the use of string harmonics amalgamated with woodwinds
in the high register.

4.8 Material H
This section is characterized by a rhythmic pattern. The double bass rasps out the lowest
note of the spectra and the flute plays the highest note in this section, accentuating the
original pattern, which has a periodicity of 8 quintuplet eighth notes. The violoncello
plays a sub-pattern with the same periodicity as the original, whereas the melodic treat-
ment generates an internal subdivision. The viola also accentuates the original pattern
in sixteenth notes creating a little gap.
The triplet subdivision is applied to the violin, clarinet and piano, the latter being
doubled by a glockenspiel. Each instrument has a different periodicity: 5 eighth triplet
notes for the violin, 21 eighth triplet notes for the piano, clarinet and glockenspiel.
Chord 30, obtained from the clamped circular plate, is used in this section with a
strong emphasis on E. The highest notes are placed just after the original pattern accent.
As a consequence, iambic polyphony is created with the short note in the low register
and the long one in the high register. This idea is used as a principle to engender and
develop new materials such as D’ and J.

Figure 15. Material H (182)

Two minor materials appear as variations of Material H. The first one (H') presents
an acceleration of the original pattern of Material H and a change of its subdivision unity
into sixteenth notes, the main similarity being the continuous gathering of the double


bass and the flute, even if the raspy E note played by the double bass subtly becomes a
harmonic two octaves higher, recalling Material B. The second one (H”) seems to form
a hybrid with Material E, because of the appearance of an accentuated C sharp note in
cowbells, double bass and piano. However, the harmonic in the double bass and iambic
character link it with Material H.

4.9 Material I
In this material the heterophony, played over a descending melody, is much more
prominent. Not all the instruments play it in unison but in microtonal approximations,
yielding a richer sonority. Only the double bass and the cowbells play the structural
melody over the original pattern; the periodicity of each instrument is independent and
is constantly changed at each appearance of the material.

Figure 16. Heterophony in Material I (272)

4.10 Material J
Using many notes from the free circular plate spectra, the lower partials are grouped as
a sort of cluster strongly attacked by the piano and lower strings with a Bartók pizzicato.
Immediately after this, a longer high chord ensues and develops the iambic idea set forth


in Material H. The high notes will gradually be extended in a melodic descent or in
overblown descending notes. Normally attributed to the flute, here they are played as
a string glissando. This procedure is often applied to create interpolation with other
material.

Figure 17. Material J (291)

4.11 Interpolation
Interpolation is a transition between two musical situations. In this piece it becomes a
very important bridge between bodies of material. Elements from the incoming section
are mixed in with elements from the outgoing section. This gathering process is gradual
and leads to the complete transformation of the outgoing section into the incoming one.
A different kind of interpolation occurs, depending on the characteristics of the sections.
Some specific examples are presented below to explain how the material is assembled.
In Measure 121, interpolation occurs between Material A and Material C. The proce-
dure starts when the accentuated notes of Material A are followed by sextuplet ascending
notes. It begins by adding the first sextuplet note to Material A, then grows note by
note into a complete ascending scale of Material C in each instrument.
Another striking example appears in Measure 286, in an interpolation between Mate-
rials C and J. Notes are subtracted from the ascending scale of Material C so as to keep
only two, one lower than the other. The orchestration is reduced to three instruments:


Figure 18. Interpolation between Material A and Material C (121)

a violin, clarinet and viola. Each begins at a different beat of the sextuplet; the rhythm
decelerates, and the lower notes gradually assemble at the same point of the beat, the
higher ones at the subsequent point. Simultaneously, the interval between the two
notes grows in contrary motion, while the rest of the ensemble appears and reinforces
the process rhythmically and melodically up to the arrival of Material J.
Yet another interpolation type is to be found in Measure 232 between Materials J
and D. In this case, after the attack over the high notes of Material J, the descending
intervallic structure of the piano in Material D gradually lengthens, as do the flute and
string canons.

5 General structure of the piece


The original project of the piece is to gradually reduce the length of the materials. This
process is not always regular because the composer has the subjective time perception
of each sub-section in mind. The length of each sub-section depends on the quantity of
material played in them, as well as how they are connected.
The structure of the piece possesses a special unity. It also has a formal hierarchy
built in three different levels of segmentation. Firstly, the form of the piece is segmented
into three, each segment being characterised by specific material and processes. Secondly,
there are sub-sections that bring together elements of material that do not contrast with
each other, either because there is a succession of similar material, or because there is
interpolation among the elements. Finally, a local level is marked out by the incoming
material, previously described in Aschenblume Musical Materials.
The formal schemas presented below show the three levels of segmentation. The main
section is divided into sub-sections. The measures at which they occur and their duration


Figure 19. Interpolation between Material C and Material J (286)

Figure 20. Interpolation between Material J and Material D (232)

in ratios are specified underneath. The third level shows the materials of each sub-section
with arrows between those that are interpolated. Here is an example of their schematic
representation.


Section
Measures
Sub-sections
Measures
Duration in ratios
Materials

Table 1. Formal scheme of the piece

1st SECTION 1-162

Sub-sections          1           2             3            4        5
Measures              1-19        20-41         42-67        68-77    78-93
Duration in ratios    74/4        83/4 + 1/8    102/4        40/4     63/4 + 1/8
Materials             A →→→ A'    A →→→ A'      A → B → A'   A → C    D

Sub-sections          6                7          8          9
Measures              94-110           111-119    120-129    130-137
Duration in ratios    67/4 + 1/8       36/4       40/4       32/4
Materials             D →→ A →→ A'     A →→→ B    A →→→ C    D →→→ B

Sub-sections          10          11
Measures              138-153     154-162
Duration in ratios    64/4        35/4
Materials             E →→→ A'    A →→→ E

Table 2. Formal scheme of the first section

5.1 1st Section (1-162)


It shows the first disintegration process undergone by Material A, which is repeated once
without any modifications. At its third appearance the materials B, C, D and E are
gradually introduced by interpolation inside the process. This first section is divided
into 11 sub-sections, each of them having interpolation between two or three bodies of
material.
The initial disintegration process of A can be time stretched, from the 3rd sub-section
where B is introduced, then the process occurs from the 4th to the 6th sub-sections and
finally from the 7th to the 10th . The time reduction of sub-sections is irregular, due to
the introduction of new materials within. Nevertheless, there is more material in less
time, which means that they are gradually undergoing a time reduction.

5.2 2nd Section


The presentation of Material F articulates the piece. After it appears, the process changes
and the materials made up of rhythmic patterns, such as B, E, I and H, occur frequently. They


2nd SECTION 163-314

Sub-sections    12         13         14         15         16
Measures        163-169    170-181    182-184    185-191    191-192
Materials       F →→→ G    D →→→ C    H          B → C H    H' → A'

Sub-sections    17         18         19             20
Measures        203-207    208-212    213-238        239-258
Materials       A →→→ I    B →→→ C    F → C → J-D    F D → C → J

Sub-sections    21         22         23         24        25
Measures        259-263    264-271    271-275    275-281   282-292
Materials       H' E       K →→ I     B →→→ I    B I D     F → C → J

Sub-sections    26         27
Measures        293-300    301-314
Materials       F →→ A     J F H H' K →→ J →→ J'

Table 3. Formal scheme of the second section

are generally bridged by short appearances of the material elements developed in the first
section.
The sub-sections become gradually shorter, and the quantity of materials interpolated
inside them increases. This means that the materials undergo a noticeable reduction in
length. The main process after Material F is the assembling of materials that possess a
special tension, released at the end over louder materials such as H or J. Between sub-
sections 12 and 15, the music finally achieves a total release over material A’. The duration
of F is reduced gradually across its appearances in sub-sections 19, 20, 25, 26 and
27.
Between sub-sections 21 and 24, the materials with rhythmic patterns quickly succeed
each other, finally arriving at material I, which generates a sort of rhythmic and melodic
unification of the ensemble through its heterophonic character. This unification is perceived
as a point of release for the entire piece.

5.3 3rd Section


The length of the material is reduced so as to completely contract the material and bring
it together in a single entity that possesses characteristics of them all. The piece finishes
with a strong and regular pulsing beat, the result of the total contraction of all elements.
The sub-sections are no longer than 31 quarter notes, which are quickly reduced to
an average of 4. When this happens, there is only one body of material left in each sub-
section, strongly contrasted with the others. Consequently this section is divided into
several smaller sub-sections that are juxtaposed until maximum reduction is achieved.
Finally in measure 417 a new body of material arises out of this contraction process. It


possesses a rhythmic pattern, and all the instruments have the same subdivision unit. The
constant pulsation is progressively announced by piano clusters and a Bartók pizzicato,
both recalling Material J. It finally arrives in measure 427 and creates a tension that is
suddenly stopped.

6 Conclusions
As mentioned in the introduction, the purpose of this article is not the detailed description
of the technical procedures used when programming the OpenMusic tools. Rather, it
emphasizes the remarkable advances of algorithmic procedures in obtaining expressive
and artistic results. Composers generally employ constraints as a tool for composition;
in Lanza's piece, however, the use of the computer accelerates the compositional process.
The generation of hierarchical rhythmic structures is a new development in the field
of Computer-Assisted Composition. The composer has managed to open a new avenue of
exploration through the control of a rhythmic generator by melodic constraints, and
has achieved significant results.
The value of this piece may also lie in the fact that the formalisation principles are
perceived as homogeneous and musical, in turn shedding light upon the global formal
structure. The present article has described these principles and the ways in which
they are applied to each body of material. The purpose is to point out the
forethought that was applied in CAC when developing a model for the programming tools. The
model then became part of the musical language that the composer has used for more
recent pieces. It is fair to say that CAC procedures are developed as an extension of the
composer's musical language.

Juan Camilo Hernández Sánchez
Born in Bogota, Colombia in 1982. He began by playing traditional Colombian music,
moving on to jazz, rock and classical music. He studied composition for two years at
Javeriana University with Harold Vasquez C. and Marco Suarez. He won the Colombian
Cultural Ministry National Competition, which allowed him to come to France to continue
his studies. In France he has studied with Jean-Luc Hervé at the Nanterre Conservatoire
and at Evry University, where he was awarded a ”Maîtrise” for his paper Formalization
In The Compositional Process. In 2003 he studied at the CCMIX (Iannis Xenakis Music
Creation Centre) with Gérard Pape, Bruno Bossis, Jean-Claude Risset, Curtis Roads and
Agostino di Scipio, among others (composers as well as software developers). He has also
studied with Brian Ferneyhough, Luca Francesconi and Philippe Leroux at the Royaumont
Foundation, where the piece Anéantir was premiered. Since 2004 he has studied composition
with Philippe Leroux at the Blanc-Mesnil National School.
His pieces have been played mostly at the Forum de la Jeune Création organized by
the New Music International Society (Sin Aliento, 2002; Vestiges du rêve, 2003; Eblouir,
2004), as well as at the CCMIX during the Rencontres d'Automne (Transmutación
cinemática).

Generating Melodic, Harmonic and
Rhythmic processes in ”K...”, an
Opera by Philippe Manoury
- Serge Lemouton -

Abstract. In order to provide some elements for musical analysis, we shall describe
here how and why Philippe Manoury made use of Computer Assisted Composition in
writing ”K...”, one of the first operas of the 21st century.

***

1 Introduction
Philippe Manoury is mostly known in the musical computing field for his research and
innovation in real time computer music. In the eighties, he collaborated closely with
Miller Puckette to develop realtime music software systems. This collaboration resulted
in the first Macintosh version of Max in 1988. This research has never ceased, and led
to the Sonus ex Machina cycle (Jupiter 1987, Pluton 1988, Neptune 1991, La Partition
du Ciel et de l'Enfer 1989) and to En Echo (1993).
In his first opera, Soixantième Parallèle (premiered at the Théâtre du Châtelet in
1997), he also used realtime electroacoustic music elements.
”K...”, his second opera (after ”The Trial” by Franz Kafka), was commissioned by
the Opéra de Paris. It was premiered at the Opéra-Bastille in March 2001, and played again
in April 2003. The electronic part of this opera is much more developed than that of
Soixantième Parallèle.
Manoury’s style uses post-serial writing techniques. These constraints or rules of
composition can be easily automated.
While composing ”K...”, Philippe Manoury asked me to implement his melodic, harmonic
and rhythmic generation systems in Computer Assisted Composition software.
OpenMusic speeds up the automatic generation of this material on the computer, a daunting
operation usually done ”by hand”. The very simple OpenMusic patches shown in this
article were therefore used to create musical material that was printed in several
bound copy-books containing reservoirs of melodies, chords and rhythms, from which the
composer drew while writing the score.


2 Generating rhythmic material


A major rhythmic character in ”K...” is made up of the following series of proportional
durations:
2 1 5 3 8 1 3 2 1 4
If the sixteenth note is taken as the basic unit, this numerical series generates a rhythmic
sequence. OpenMusic allows us to represent this rhythm in musical notation (figure 1).

Figure 1. Rhythmic material

Figure 2. Rhythmic material generation patch

The OM patch (figure 2) operates permutations of groups of 2 or 3 duration values,
using the following groupings of the numerical series:

(2 1 5) (3 8) (1 3) (2 1 4)

The result of all the permutations is a set of 24 different rhythms (figure 3).
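
The logic of this process can be sketched outside OpenMusic in a few lines of plain Common Lisp (the function names below are ours, not OM's): the four duration groups are permuted, and each permutation is flattened back into a duration series.

(defun permutations (lst)
  "Return all permutations of LST."
  (if (null lst)
      '(())
      (loop for x in lst
            append (mapcar (lambda (p) (cons x p))
                           (permutations (remove x lst :count 1))))))

(defvar *groups* '((2 1 5) (3 8) (1 3) (2 1 4)))

;; 4 groups yield 4! = 24 rhythms, each a flat list of sixteenth-note counts.
(defvar *rhythms*
  (mapcar (lambda (p) (apply #'append p))
          (permutations *groups*)))

;; (length *rhythms*) => 24
;; (first *rhythms*)  => (2 1 5 3 8 1 3 2 1 4)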


Figure 3. Rhythmic permutations

These rhythms are then displayed using the 9 following different base duration values
(tempo 60 MM).

sixteenth note            250 ms
triplet eighth note       333 ms
quintuplet quarter note   800 ms
quintuplet sixteenth note 200 ms
quintuplet eighth note    400 ms
eighth note               500 ms
sextuplet sixteenth note  167 ms
triplet quarter note      666 ms
quarter note             1000 ms

An actual application of these OpenMusic-generated rhythms can be found in the
opera's prologue, in the string section, from bar 11 to 51 (see figure 4). Here, the same
rhythmic processes are presented simultaneously on different time scales (contrabasses:
quarter notes; violas: triplet eighth notes; violin II: sixteenth notes). This section is followed
(from bar 52 to 80) by another polyrhythmic presentation (contrabasses: triplet eighth
notes; cellos: triplet eighth notes; violas: sixteenth notes; violins: triplet quarter notes; see
figure 5).


Figure 4. Excerpt from the orchestra score of ”K...”’s prologue (bar 21-25)

Figure 5. Excerpt from the reduction for piano of ”K...”’s prologue


3 Melodic generation and serial inlayings


The pitch organization of the whole opera is based upon four main series of 7, 8, 6 and 7
notes respectively (see figure 6).

Figure 6. Series a, b, c and d

Figure 7. Patch generating the different forms of a series


What Manoury calls a process of inlaying (incrustations) is applied to these series. This
process consists in searching, within the 48 canonic forms of each series (the transposed,
retrograde and inverted forms are generated in OM by the 4 ser-op functions in the patch
in figure 7), for all the occurrences of all the intervals present in the original series. All the
forms containing a given interval are then selected. The intervals between all the notes of
the series are taken into consideration (not just those between consecutive pitches): 7 notes
yield 21 intervals (formula: N*(N-1)/2). In the printed book of material generated by
OpenMusic, the selected series are vertically aligned so as to clearly display the interval in
question (see figure 9). The same process is applied to the 3 other matrix-series (yielding
respectively 28, 15 and 21 inlays).
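
The search itself can be sketched in plain Common Lisp as follows; the ser-op functions do the real work in OM, so this code - and the example series in the comment - is only illustrative.

(defun transpose (series n)
  (mapcar (lambda (p) (+ p n)) series))

(defun invert (series)
  "Mirror SERIES around its first note."
  (let ((axis (first series)))
    (mapcar (lambda (p) (- (* 2 axis) p)) series)))

(defun canonic-forms (series)
  "The 48 forms: prime, inversion, retrograde and retrograde
inversion, each in 12 transpositions."
  (loop for form in (list series
                          (invert series)
                          (reverse series)
                          (reverse (invert series)))
        append (loop for tr from 0 below 12
                     collect (transpose form tr))))

(defun contains-interval-p (series interval)
  "True when two notes of SERIES - not necessarily consecutive -
are INTERVAL semitones apart."
  (loop for (a . rest) on series
        thereis (loop for b in rest
                      thereis (= (abs (- a b)) interval))))

;; Forms containing a perfect fourth (5 semitones, e.g. the E/A inlay),
;; for a hypothetical 7-note series:
;; (remove-if-not (lambda (f) (contains-interval-p f 5))
;;                (canonic-forms '(64 66 68 71 72 74 76)))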

Figure 8. Series c, inlaying on mi/sol

Figure 8 shows another page of the printed notebook in which P. Manoury was able to
choose a form of series b containing the interval E-G. This process allowed the composer to
find classes of transposition with intervals in common, and to create an atonal equivalent
of tonal modulations.
In the first scene of the opera (when the inspector tells the unfortunate K. that he
is under arrest), the vocal parts are based on the E/A inlay. This pivot-interval is first
presented by the inspector when he shouts ”Josef K.?” (see figure 10). Then, from bar
240 to bar 252, the vocal lines follow the series presented on page mi/La of the notebook
(refer to lines 1 to 11 of figure 9). This allows the composer to move through various
series while maintaining focus on the emblematic fourth interval.
Note that the accompaniment of the vocal part is built on the duration value series
(2 1 5 3 8 1 3 2 1 4) shown above. Another example of the serial inlaying process is to
be found in the trumpet calls symbolising the judicial tribunal (scene 4 of the opera, see
figure 11), in which the same series is polarized around another emblematic interval, the
fifth E-B (present in the original form of series b).


Figure 9. Series a, inlaying on mi/la


Figure 10. Excerpt from scene 1 of ”K...” (piano reduction by the author)

Figure 11. Beginning of ”K...”’s scene 4 (manuscript)

4 Ornamentation
The OpenMusic patch shown in figure 12 generates very long melodic lines starting from
a 27-note series constructed by the concatenation of the four above-mentioned series (see
figure 6).
ser-op is a function from the Combine library, written by M. Malt for B. Ferney-
hough. This function generates the inversion of the matrix-series, with sol2 (midi55) as
a pivot-note. The pma-mul abstraction calculates classified chord multiplications. The
transpositions of this series from its inversion (diagonal pivot-note) are joined end to end


Figure 12. Patch generating the diagonal transpositions of a series

Figure 13. Excerpt from the orchestra score of ”K...”’s prologue (bar 21 to 25)

in order to form harmonic compounds. The linking of these transpositions is grouped
using the 2 looped density series:

(4 1 14 9 5 12 1 16 8 9 12 9 16 16 5 12 1 14 7 5 8 13 1 14 15 21 18 25)
(4 1 2 9 5 1 4 8 9 1 9 4 5 1 2 7 5 8 1 2 9 6 1)

The same process is applied using E, G and E flat as pivot-notes, then on the
retrograde version of the matrix-series. This process is used, for instance, in the woodwind
ornamentations (groupes-fusées) that can be heard throughout the opera's prologue (see
figure 13). The number of notes in each group follows the density lists shown above.
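
The grouping by looped density series can be sketched as follows (plain Common Lisp; the function name is ours, and the end-to-end transpositions are assumed to form one long pitch list):

(defun group-by-densities (pitches densities)
  "Split PITCHES into consecutive groups whose sizes follow
DENSITIES, looping over DENSITIES until PITCHES is exhausted."
  (loop with rest = pitches
        for i from 0
        while rest
        for n = (nth (mod i (length densities)) densities)
        collect (subseq rest 0 (min n (length rest)))
        do (setf rest (nthcdr n rest))))

;; (group-by-densities '(60 61 62 63 64 65 66 67 68 69) '(4 1 2))
;; => ((60 61 62 63) (64) (65 66) (67 68 69))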


5 Conclusion
Philippe Manoury used Computer Assisted Composition in a very pragmatic way. His
goal was the automatic generation of musical material. Analysing this process may
provide avenues of approach and clues to anybody interested in the study of Philippe
Manoury’s methods of composition.

Figure 14. The courtroom scene. Director: André Engel. Scenography: Nicki Rieti

Serge Lemouton
Serge Lemouton was born in 1967. After studies in violin, musicology, harmony and
counterpoint as well as composition, he specialised in various computer music domains
at the Sonus Department of the Conservatoire National Supérieur de Musique de Lyon.
Since 1992, he has been a musical assistant at Ircam. This has enabled him to collaborate
with Ircam researchers in developing computer music tools, and he has taken part in the
musical projects of numerous composers, among them Michael Levinas, Magnus Lindberg,
Tristan Murail and Marco Stroppa. He has been in charge of realtime creation and
performance in Philippe Manoury's operas ”K...” and ”La Frontière”. He is also a teaching
assistant at Paris VIII University.

Composing the Qualitative, on
Encore’s Composition1
- Jean-Luc Hervé and Frédéric Voisin -

Abstract. In this paper, we will present the use of CAC in the writing of the electro-
acoustic parts of Encore, for ensemble, two Disklaviers and live electronics. Jean-Luc
Hervé, the composer, and Frédéric Voisin, the musical assistant, experimented with
new features of the OpenMusic software using the LISP language. The concept of musical
gesture was developed from a ”top-down” perspective to generate the solo material.

***

Encore is written for 18 musicians, electronics and two Disklaviers, one of which is
tuned one quarter of a tone lower than the other. The Disklaviers were chosen for the
instrumentation for two reasons. First, a Disklavier, being a mechanical instrument, has
a presence of its own, quite different both from a musician's and from sound diffused through
loudspeakers. This oddness itself creates a slight change in the usual concert
scenography that enhances the sharpness of the listener's perception and provokes more
careful listening.
The second reason is the Disklavier's peculiar quality of being at the same time an
orchestral instrument and able to receive MIDI data in order to directly diffuse musical
material composed with the help of computers. Therefore, writing the Disklaviers' scores
is closer to electroacoustic work than writing for an instrument, and the fact that the
limits of a performer need not be taken into consideration increases that closeness.
It was possible to think of the Disklaviers’ scores as a direct work on sound and not as
plain instrumental scores.
In Encore, the Disklaviers make it possible to cross the two usual modes of emitting
sound: acoustic instruments and loud-speakers. Here, an acoustic instrument diffuses
computer-made sounds and musical materials. It was this ability of being at the same
time an instrument and an electroacoustic tool - the possibility of adding to the two usual
modes of emitting sound a third one, a hybrid of both - that interested us.
We composed the Disklaviers' MIDI scores with OpenMusic. When we were working
on the creation of Encore in the studios of IRCAM, the kernel of OM was still being
developed. We experimented with the early versions and were interested in particular
by the possibilities of meta-programming they offered.
A LISP program was conceived to create MIDI sequences with the possibility of
organizing sound ideas - compositional gestures - without having to elaborate them from
the ”physical” musical material (pitches, intensities...). The program had to create the
details of the sequences from macro-structural data. However, although developing a

1 Encore was created at the Centre Georges Pompidou, Paris, on June 10, 2001.


material of pitches and durations starting from a lower structural level (ascending by
successive embeddings) is natural in OM, the reverse, descending operation is much
less developed. The idea was to be able, in the first place, to construct ”prototypes”
of compositional gestures - musical situations foreseen in a global way - that would
then be refined by the program in terms of pitches, durations and intensities. The
Disklavier scores in Encore, the ”gestural” musical sequences (textures), were thus built
upon three basic gestures: an ascending stroke, a descending stroke, and a repeated note.
We first started this approach by experimenting with a ”natural language” interpreter.
Constructing a syntax interpreter and a dictionary that associated words with LISP
functions allowed us to generate digital musical sequences based on propositions
expressed in a human language. The LISP language was perfectly suited to this
experiment. For example, the following statement: ”repeat a descending line, by tones,
from fa♯3 to do3, decrescendo” was translated into a corresponding MIDI note sequence,
according to an arbitrary grammar (recursive syntax rules) and a dictionary calling
LISP methods, which could be redefined if needed (see figure 1).
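
The following toy sketch gives an idea of the last stage of such an interpretation. It is not the interpreter itself - the real dictionary and grammar were far richer, and dynamics are omitted here - and it assumes the convention do3 = middle C (MIDI 60).

(defparameter *note-names*
  '(("do" . 0) ("re" . 2) ("mi" . 4) ("fa" . 5)
    ("sol" . 7) ("la" . 9) ("si" . 11)))

(defun note->midi (name octave &optional sharp)
  "French note name to MIDI number, with do3 = 60."
  (+ (cdr (assoc name *note-names* :test #'string=))
     (if sharp 1 0)
     (* 12 (+ octave 2))))

(defun descending-line (from to step)
  "MIDI pitches from FROM down to TO in steps of STEP semitones."
  (loop for p from from downto to by step collect p))

;; "repeat a descending line, by tones, from fa#3 to do3, decrescendo",
;; reduced by hand to its parameters:
;; (descending-line (note->midi "fa" 3 t) (note->midi "do" 3) 2)
;; => (66 64 62 60)   i.e. fa#3 mi3 re3 do3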

Figure 1. Generating a musical sequence by means of statements expressed in ”natural” language

These basic gestures are meant to be articulated with each other. But unlike a plain
combination or juxtaposition, the change of a gesture within a sequence might have
an incidence on the updating of the other gestures, upstream as well as downstream. For
instance, an ascending-descending movement is different from an ascending movement
followed by a descending one, since there is a common point (a pivot). It was appropriate
to be able to apply the following principle: two juxtaposed gestures put side by side
can be linked by a relation - established in a linguistic proposition - that modifies the
instantiation of their components. Therefore, in this same example, the pivot-note,
though belonging to both the ascending and the descending stroke, should be produced only
once, as long as the two strokes are juxtaposed and explicitly linked (see figure 2).
In OM, it was appropriate to adapt these linguistic steps to a graphical description
of music, comparable to the composer's. For this, we developed a function
(strait) that calls, if needed, either a human language interpreter or arbitrary LISP
functions (possibly ”lambda” functions) that interpret graphic objects, notes, chords,
temporal objects, lists and links, in complement to other linguistic indications.


Figure 2. Generation

Figure 3. ”Strait”, the LISP function that enables the creation of the basic gestures

In Encore, a gesture could be defined by its ambitus, harmonic color, rhythmic material,
dynamic profile and duration. The ambitus is made of two notes, a starting note
and an ending note, represented in a ”chord” object that also determines the gesture's
type. If the ending note is higher than the starting one, the gesture is an ascending
stroke; in the reverse case it is a descending stroke; and if the notes are the same, the
gesture is a repeated note. The harmonic color is given by a harmonic field, a list of
notes that will be played in a preferential way. If no harmony is indicated, as in
figure 3, notes will be calculated without harmonic constraints.
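
How the type of a gesture follows from its ambitus can be shown in a small illustrative sketch (plain LISP, not the actual ”Strait” code):

(defstruct gesture
  start end   ; ambitus: starting and ending MIDI notes
  field       ; harmonic field (list of preferred pitches) or NIL
  speed       ; notes per second: the gesture's "grain"
  dynamics)   ; dynamic profile, e.g. a list of velocities

(defun gesture-type (g)
  (cond ((> (gesture-end g) (gesture-start g)) :ascending)
        ((< (gesture-end g) (gesture-start g)) :descending)
        (t :repeated-note)))

;; (gesture-type (make-gesture :start 60 :end 72)) => :ASCENDING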
In the whole Disklavier score, the rhythmic values are very quick and constitute the
gesture's ”grain”. If speed is very high, the impression will be of a gesture with a slight
and very rapid undulation. If it is slower, the gesture will have a more abrupt aspect.
If speed is not regular but repeats a rhythm with rapid values, this rhythm will give the


”grain” a special characteristic.


The dynamic profile is given by the dynamics of every note in the stroke. Last, the
duration of the stroke is determined by the object's length when it is put into a maquette
and becomes a temporal object. This is where the program creates the notes that
correspond to the graphic or linguistic descriptions, and realizes a ”gesture” according to
its contextual, and therefore temporal, characteristics. Figure 4 shows a gesture resulting
from the data of figure 3.

Figure 4. The ”Strait” function put into a maquette

Time in seconds is shown on the x-axis, and the y-axis has no significance. Here, the
gesture lasts a little more than 2 seconds (figure 5). Each voice (channel) commands a
Disklavier: channel 1 commands Disklavier1 (tuned normally) and channel 2 Disklavier2
(tuned one quarter of a tone lower).

Figure 5. The resulting transcription

If one of the data used to construct this gesture is modified, the program will
recalculate the stroke. This makes it possible to vary a gesture in one of its own characteristics
while maintaining all of the others.
Figure 6 shows the ascending stroke of figure 4 when the speed of its notes is changed.
The duration is the same (2 seconds, i.e. 2 beats at 60), but the number of notes is doubled.


Figure 6. Gesture of figure 4 with higher speed

If a harmonic field is specified (see figure 7), the notes in the stroke will be in part
different.

Figure 7. Example of figure 4 with a harmonic color (here, an A spectrum)


The program will write the notes between the ambitus' two limits so as to fill the
interval of time determined by the length of the temporal object, in accordance
with the given speed. It will divide the interval between the first note and the last one
into equal parts, and the resulting notes will be approximated to the quarter tone. The
harmonic field will modify this approximation in order to obtain as many notes
belonging to the harmonic field as possible. The stroke has the same duration and number
of notes as in figure 4, but the notes were approximated in a different way: the stroke goes
this time through the harmonics 9, 12, 13 and 14 of a harmonic spectrum on the fundamental
C (33 Hz).
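
A minimal sketch of this interpolation, in midicents (100 = one semitone, 50 = one quarter tone); the snapping tolerance is our assumption:

(defun round-to (x unit) (* unit (round x unit)))

(defun stroke-pitches (from to n &optional field (tolerance 50))
  "N pitches from FROM to TO (midicents, N >= 2), quantised to the
quarter tone; pitches within TOLERANCE of a FIELD note snap to it."
  (loop for i from 0 below n
        for raw = (+ from (* i (/ (- to from) (1- n))))
        for q = (round-to raw 50)
        collect (or (and field
                         (find q field
                               :test (lambda (a b)
                                       (<= (abs (- a b)) tolerance))))
                    q)))

;; Without a harmonic field: equal quarter-tone-quantised steps.
;; (stroke-pitches 6000 7200 5) => (6000 6300 6600 6900 7200)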
If the duration is lengthened by stretching the object in the maquette, the program
automatically adds notes to the stroke in order to maintain the grain given by the notes' speed
(see figure 8).

Figure 8. Example of figure 4 stretched

The gesture has the same characteristics as in figure 5, but has been stretched graphically
(click and drag) in the maquette by one additional second (see figure 9).

Figure 9. The resulting transcription

These last 3 examples are 3 different realizations of the same gesture. In this way, a large
range of variation of the 3 basic gestures (ascending, descending, repeated) could be reached;
once combined, juxtaposed and/or bound, these gestures could generate a large variety of
musical sequences.
The combination could be done by superposing, juxtaposing or sequencing gestures.
With superposition, the resulting gestures are simple but have a more complex harmonic
and rhythmic substance.


In a maquette, sequencing gestures is different from simply juxtaposing them, because
of the link between two ”Strait” temporal objects.

Figure 10. Multiple gesture (made by sequencing)

In sequencing (figure 10), as we have already seen, the properties of a gesture must be
”echoed” and taken into consideration while interpreting the following gesture, in order
to achieve real binding, unify the resulting gesture and thus adapt every initial part of it.
We also realized that it was necessary at times, depending on the nature of the bond, to
”reverberate” properties of upstream gestures in downstream ones. In other words, not
only did we have to adjust the preceding gesture in accordance with the following one,
but we had to go backwards in the LISP chain of evaluation (we probably reached at
that moment a certain limit of OM, where the chain of evaluation in a maquette depends
on the time-axis, from left to right).

Figure 11. Multiple gesture (made by juxtaposition)


In order to make the modification of a multiple gesture easier, the data is put together
in a ”material distributor” (topmost box in the maquette, figure 11), created graphically
in OM, that determines - depending on the bonds - the starting and ending notes,
harmony, rhythm and speed of every gesture, and their dynamics. The other boxes each
represent an individual gesture made with as many ”Strait” temporal objects.
In the special case of identical starting and ending chords, the result is a texture of
repeated notes within the same harmony. In the reverse case, when they are different,
the chords are interpreted by the ”Strait” function depending on the graphic and/or linguistic
descriptions located in the ”patches” (temporal objects, represented here in figure 12 by
the generated sequences).

Figure 12. Texture of repeated notes

In figure 13, the word motif is interpreted by the ”strait” function as a succession
of notes that will form a pattern. The pattern is repeated for as long as the temporal
object lasts in the maquette, whether its duration is defined by hand and/or results from
the interpretation (calculation). If several patterns with different speeds are superposed
upon the same harmony, the texture will be different from the ones made with repeated notes.


Figure 13. A texture composed by superposing different patterns, running through all the
notes of the chord at different speeds

By sequencing in succession the superpositions of gestures, we were able to build
bigger musical sequences. At the end of Encore, there is a long Disklavier cadence, made
of maquettes nested into each other on several levels.

Figure 14. Maquette of the whole sequence


The maquette of the whole cadence shows a succession of 10 objects corresponding to
the 10 phrases of the cadence (see figure 14).

Figure 15. Maquette of phrase 1

Each phrase-object itself is a maquette that contains the sequencing of all of the
phrase’s gestures. Each object-gesture in the maquette of a phrase is also itself a maquette
that contains the stacking of the basic gestures (see figure 15). These basic gestures are
temporal objects containing a strait module.

Figure 16. Score transcription of phrase 1


Thanks to the possibility given by OM of nesting maquettes within other maquettes on
several levels, we were able to build complex musical sequences and thus compose the
whole Disklavier cadence.
The follow-up to this studio work of creation was to expand the ”Strait” tool beyond
the specific requirements of Encore. The aim was to develop a more general composition tool
that would realize not only MIDI scores, but also musical sequences based on sound
synthesis (CSound, Max-MSP...). This problem is wholly different in nature: the need is no
longer to create a discrete approach such as MIDI encoding, but to integrate continuous
data. In order to have a wider range and to be of interest to both instrumental music
and electroacoustics, we have considered gestures as the basic elements of musical
construction, because a note is nothing more than an amorphous and static point, whereas
a gesture is already animated by movement. We therefore imagined a generalisation of
the class note in OM, based on the notion of gesture. A class gesture has been defined
as a curve describing a course between an initial and a final state: a change of state
and a way to go through this change of state. For example, regarding the pitch field, it
would be a sliding of pitch (upwards or downwards) and the curve (linear, exponential or
any curve that can be described) to carry out this sliding. A new class ”gesture” has been
defined in OM: it is not an augmented ”chord-seq” as with ”Strait”, but a special BPF
(breakpoint function) with which several behaviour types are associated. The class note
is thus expanded and includes the notion of gesture. A gesture object (a BPF describing
each dimension) is assigned to each parameter of a note (pitch, intensity etc.). When a
gesture is defined by a horizontal line, it means this is a ”traditional” note. Access to the
continuous domain enables us to expand this note, its articulations and developments, by
means of synthesis programs (such as CSound or Max-MSP) and their related protocols
(SDIF or TCP-IP).
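
A sketch of this generalisation; the class and slot names below are illustrative, not OM's actual definitions:

(defclass bpf ()
  ((times :initarg :times :accessor bpf-times)    ; breakpoints, seconds
   (vals  :initarg :vals  :accessor bpf-vals)))   ; curve values

(defclass gesture-note ()
  ((pitch     :initarg :pitch     :accessor g-pitch)      ; a BPF, midicents
   (intensity :initarg :intensity :accessor g-intensity)  ; a BPF, velocities
   (duration  :initarg :duration  :accessor g-duration))) ; seconds

(defun traditional-note-p (note)
  "A flat (horizontal) pitch curve means an ordinary, static note."
  (let ((vals (bpf-vals (g-pitch note))))
    (every (lambda (v) (= v (first vals))) vals)))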

Jean-Luc Hervé
Jean-Luc Hervé was born in 1960. He studied composition at the Conservatoire de Paris
with Gérard Grisey. He was a composer in research at Ircam and has obtained several
residencies (Villa Kujoyama - Kyoto, DAAD - Berlin). His orchestral piece, Ciels, was the
winner of the Goffredo Petrassi prize in 1997.

Frédéric Voisin
Frederic Voisin was born in 1966. In 1980
he discovered the ancestor of the sea siren
(Alitherium Schinizi) in an exhibit at the
Natural History Museum of Venice. Before
becoming a researcher in ethno-musicology
at the LACITO-CNRS from 1989 to 1995,
Voisin studied musicology at the University
of Paris IV and Inuit linguistics at the Institut
National des Langues et Civilisations
Orientales. He carried out research in central
Africa with Simha Arom on the Aka Pygmies,
and then in Java, Indonesia. There he produced the first in situ musicological
experiments using sound synthesis, with a DX7 and computers he had brought along.

Voisin has been a musical assistant at Ircam in Paris since 1995 in addition
to taking part in artistic activities at CIRM and ArtZoyd. Frederic Voisin has
worked on the computer production of several pieces in collaboration with the
composers Daniel d’Adamo, Jean-Louis Agobet, Jean-Baptiste Barrière, Heiner
Goebbels, Jean-Luc Hervé, Horatiu Radulescu, Philippe Leroux, Martin Matalon,
Emmanuel Nunes, François Paris, Roger Reynolds, Fausto Romitelli, Atau Tanaka,
Kasper Tœplitz, Giovanni Verrando, and Iannis Xenakis. He has also collaborated
with the choreographers Myriam Gourfink, Rachid Ouramdane as well as the
producers Maurice Benayoun and Peter Greenaway. Since 2003, Frederic Voisin has
been the head of research at CIRM in Nice where he is working on musical research
and artificial intelligence (Neuromuse Project).

Navigation of Structured Material in
”second horizon” for Piano and
Orchestra (2002)
- Johannes Kretz -

Abstract. Several aspects of computer aided composition (CAC) were combined
in the realisation of ”second horizon” for piano and orchestra1: modelling of physical
movements was used to generate organic, physically plausible gestures. ”Constraint
programming” made it possible to shape these organic - but vague - gestures into precisely
structured, consistent musical material with a strong inner logic. The ”spectral approach”
helped to define rules for obtaining ”well sounding” solutions from the given material, and
provided tools for composing microtonal pitches. Finally, constraint programming was again
used to implement some (basic) tools of computer aided orchestration. This article
describes the promising potential of the interaction of all these techniques within the
framework of the OpenMusic environment. The techniques presented highlight the fact that CAC
is moving far beyond being ”algorithmic” or ”mechanical”. The approach of ”composition
through interactive models”, as developed in the international composers' group PRISMA2,
helps to create structures that can carry... emotions!

***

1 Challenges
One of the particular challenges of composing is (and always was) finding the balance
between simplicity and complexity or, what is even more difficult, creating a piece
of art that can be perceived at various levels of attention - for example, in such a way
that the most obvious level (the surface) can be understood and enjoyed easily, without
special knowledge (even intuitively), while other, deeper, less obvious layers of the work
provide more subtle content satisfying the more sophisticated tastes of connoisseurs.
On the other hand, it turns out to be particularly difficult to formalize such a process
of creation, where various structural layers of material, from the surface to the inner
structures, have to interact permanently, and where various parameters (pitch, time) in
various layers of complexity (melodies, chords, rhythmic cells, formal parts etc.) influence
each other. Finally, the use of both simple functional programming techniques as well

1 ”second horizon” was commissioned by the Swiss foundation ”Christoph Delz” as part of an international competition for orchestral works.


2 ”Pedagogia e Ricerca Internazionale sui Sistemi Musicali Assistiti”, with Jacopo Baboni Schilingi,
Hans Tutschku, Paolo Aralla, Nicola Evangelisti, Giacomo Platini, Michele Tadini, A. Sandred, Frederic
Voisin, Kilian Sprotte, the author and many others. For further information see Baboni Schilingi et al. (2003).


as the development of a complex rule-based interactive system of creation turned out to
be useful.

2 Starting point
The basis of the whole piece is the dodecaphonic row shown in figure 1. It was composed
with particular attention to its potential for being segmented into chords of a certain
aesthetic quality. All the melodic and harmonic material of the piece is unfolded from
this row; even the rhythmical domain is influenced, although more indirectly.

Figure 1. Dodecaphonic row

3 Simple unfolding of material


In order to obtain an organic extension of the row - to let it ”grow naturally” in the
domain of pitch, but in a way which avoids the triviality of simple copying/transposing
as well as a periodicity of groups of 12 notes - the following simple (but nevertheless
efficient) technique was used: for each ”repetition” of the row, the list of intervals forming
the row was rotated in such a way that the first interval of the preceding version became
the last interval of the current one (see figures 2 and 3). The continuous moving of
the first interval of the row to the last place for each subsequent instance produces a
permanent shifting between the ”true” beginning of the series of intervals (marked by
”i1”, ”i2”, ”i3” etc. in figure 4) and the apparent beginning of the series of notes (marked
by ”r?” in figure 4). Therefore, after hearing the row in its original form, it seems to have
disappeared (because its ”head” - the most noticeable part - moved to the end), and then
it reappears, growing each time by one note (but always shifted in position), until the
complete series appears again at the end (although transposed).
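
The rotation can be sketched in plain Common Lisp (dx->x is modelled on the OM function of the same name; that each new instance starts from the last pitch reached is an assumption of this sketch):

(defun intervals (pitches)
  (mapcar #'- (rest pitches) pitches))

(defun rotate-left (lst)
  (append (rest lst) (list (first lst))))

(defun dx->x (start ints)
  "Rebuild a pitch list from a start pitch and an interval list."
  (let ((acc (list start)))
    (dolist (i ints (nreverse acc))
      (push (+ (first acc) i) acc))))

(defun unfold-row (row n)
  "N instances of ROW, the interval list rotated one step further
for each new instance."
  (loop repeat n
        for ints = (intervals row) then (rotate-left ints)
        for start = (first row) then (first (last instance))
        for instance = (dx->x start ints)
        collect instance))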

Figure 2. Beginning of the unfolding of the row by rotation of intervals


Figure 3. Patch for rotation of intervals

Figure 4. Complete unfolding of the row by rotation of intervals

4 Generating chords from melodies

Another ”simple” technique, already used in many of the author's compositions, is the
”scrolling” through a melody (the dodecaphonic row in this case) to obtain a ”family”
of chords with strong similarities. A certain number of subsequent notes from the row
(in this example 5) is always grouped into a chord. (In order to keep the number of notes
constant for each chord, the row is looped.) Adjacent chords always have many notes
in common (in this example always 4 notes), which gives a strong coherence to the series
of chords and lets the listener perceive the original row ”through” the chords.
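
The procedure is easily sketched (plain Common Lisp): a window of a given size slides one step at a time over the looped row, and each window becomes one chord, so that adjacent chords share all but one note.

(defun scroll-chords (row size)
  (loop for i from 0 below (length row)
        collect (loop for j from 0 below size
                      collect (nth (mod (+ i j) (length row)) row))))

;; With a hypothetical 6-note row:
;; (scroll-chords '(60 61 65 62 68 63) 5)
;; => ((60 61 65 62 68) (61 65 62 68 63) (65 62 68 63 60) ...)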


Figure 5. Patch (inside the omloop-Object) for obtaining chords by ”scrolling” through a series
of pitches

Figure 6. Result of the patch of figure 5 applied to the row (figure 1)

5 Simulation of physical movements to shape structured musical material
To obtain interesting gestures which have plausibility as movements of physical bodies, a
model of jumping rubber balls in a closed room, subject to gravity, wind, air friction,
reflection etc., was used. By defining a vector of position (x and y coordinates) and speed
(vx and vy) for each ball, and by updating these vectors at regular time intervals according
to the influences which position, gravity, speed, wind etc. have on each other, a simple
but efficient model was developed. This was programmed in the Max3 environment in
order to have interactive real time access to these movements. Finally, the position data
were recorded into text files for transfer into OpenMusic (see figures 7, 8 and 20). It
is evident that a simple translation of the position parameters of these shapes into the
domain of chromatic pitch would produce flat, rather unsatisfying musical results. While
the curves proved useful for shaping (or influencing) the surface of the music, other tools were
needed to shape the inner, structural side of the music in interaction with these curves (Kretz,
2003).
3 Originally developed at IRCAM, now www.cycling74.com


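The update loop can be sketched as follows (plain Common Lisp; the original model ran in Max, and all constants here are merely illustrative):

(defun simulate-ball (&key (x 0.0) (y 10.0) (vx 2.0) (vy 0.0)
                           (dt 0.02) (steps 500)
                           (gravity -9.81) (wind 0.3)
                           (friction 0.999) (bounce 0.8) (width 20.0))
  "Return a list of (x y) positions of a bouncing ball."
  (loop repeat steps
        collect (list x y)
        do (setf vx (* friction (+ vx (* wind dt)))
                 vy (* friction (+ vy (* gravity dt)))
                 x  (+ x (* vx dt))
                 y  (+ y (* vy dt)))
           ;; reflect off the floor and the two walls
           (when (< y 0.0) (setf y (- y) vy (* (- bounce) vy)))
           (when (or (< x 0.0) (> x width))
             (setf x (max 0.0 (min x width)) vx (- vx)))))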

Figure 7. Simulated movement of a rubber ball

Figure 8. Simulated movement of two rubber balls being attracted by each other

The pmc-engine, an OpenMusic library4 providing an environment for constraint
programming, proved to be the ideal tool for this task. First, a search space of allowed chords
had to be defined. In this case it was obtained by scrolling through the dodecaphonic row
as described above (figure 6). The search space was composed of all inversions, octave
permutations and transpositions of those chords. Additional constraints were applied as
rules: the range of the piano should not be exceeded, and two adjacent chords should have
no more than three common notes, but at least one.
The shape of the rubber ball movement was implemented as a heuristic rule. This
means that, from all possibilities passing the (strict) rules, the engine always chose a
solution as close as possible to the ”ideal” shape of the jumping rubber ball (figure 7).
Figure 9 gives an overview of the flow of information; figure 10 shows a possible result. It
can be clearly seen that the shape is not represented perfectly, and that some of the inner
- structural - necessities result in compromises of the shape, especially the rule enforcing
common notes between two adjacent chords. But exactly this interaction between outer
and inner conditions of creation gives interesting, organic results.
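
The division of labour between strict and heuristic rules can be sketched like this (plain Common Lisp; the pmc-engine's actual interface is different, and scoring a chord by the distance of its highest pitch from the target curve is our simplification):

(defun common-notes (a b) (length (intersection a b)))

(defun candidates (chords prev)
  "Strict rule: between one and three common notes with the
previous chord."
  (remove-if-not (lambda (c) (<= 1 (common-notes c prev) 3)) chords))

(defun pick-chord (chords prev target)
  "Among the allowed chords, the one whose highest pitch lies
closest to TARGET, a point of the rubber-ball curve."
  (let ((ok (candidates chords prev)))
    (when ok
      (first (sort (copy-list ok) #'<
                   :key (lambda (c) (abs (- (apply #'max c) target))))))))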

4 Based on PWConstraints by Mikael Laurson (1995) ported to OpenMusic by Örjan Sandred (1999)


Figures 12 and 13 show the use of this ”jumping rubber ball arpeggio” in the final
score. The rhythm of the piano was again generated by the pmc-engine, using Örjan
Sandred's wrapper for rhythmical constraint programming, OMRC (Sandred, 2003). The
pitches of figure 11 are grouped into various rhythmic cells of possible subdivisions of a
quaver: 1 rest + 3 notes, 2 rests + 2 notes, 0 rests + 4 notes, 1 rest + 4 notes, 1 rest +
5 notes, 0 rests + 5 notes, 0 rests + 6 notes, 1 rest + 4 notes. Additional rules favoured
subdivision changes from cell to cell and limited the complexity of the result.

Figure 9. Overview

Figure 10. Arpeggiated chords in a shape close to the ball’s movement (beginning)

6 Harmony and counterpoint


The following example is from the first tutti section of ”second horizon”. Two layers of
six voices each were generated. The search space for the lower layer was obtained from
the first six notes of the dodecaphonic row (figure 14) by transposition, inversion, and


Figure 11. Arpeggiated chords in a shape close to the ball’s movement

octave changes. Already in the process of generating the search space (done with the
help of another instance of the pmc-engine), a certain amount of attention was paid to
the sounding quality of the chords: only those octave transpositions of the mother chord
that excluded undesired intervals (tritones or semitones in this example) between adjacent
pitches were allowed (see figure 15).
This chord material was then put together in a sort of chorale by applying the following
rules: the lowest note of each chord should always come from the melody of figure 4
(the expansion of the dodecaphonic row by rotation of its intervals), but transposed
and constrained to a very low register (figure 16). Heuristic rules were used to express
the following ”wishes” or tendencies. Preference was given to adjacent chords with 2
common notes, followed by chords with 1 common note and (with much less weight) with
3 common notes. Also, when deciding which chord to choose, the one with the biggest
value for its smallest frequency difference between adjacent pitches was preferred. For
each ”candidate” the frequency differences were calculated using the x->dx object, and
the minimum of these differences was used to decide which chord would ”sound best”
(be least dissonant). This rule is certainly inspired by spectral thinking, that is, interpreting
pitches as similar to the overtones of a spectrum. If the frequency difference between two
notes is too small, it is hard to perceive an interval. Usually this happens when playing a
small interval in the low register, which is perceived by most listeners as an unpleasant
or dirty sound.
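
The criterion is easy to state in code (a plain-LISP sketch; x->dx mirrors the OM object mentioned above):

(defun mc->hz (mc)
  "Midicents to Hz (6900 = A4 = 440 Hz)."
  (* 440.0 (expt 2.0 (/ (- mc 6900) 1200.0))))

(defun x->dx (xs) (mapcar #'- (rest xs) xs))

(defun min-freq-diff (chord)
  "Smallest frequency gap between adjacent pitches of a chord
given in midicents; the bigger, the less dissonant."
  (reduce #'min (x->dx (mapcar #'mc->hz (sort (copy-list chord) #'<)))))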
The result can be seen in figure 17. The rhythmic design of this chorale was also
done with the smallest frequency difference as criterion: more dissonant
chords were given a shorter duration, less dissonant ones a longer one (figure 18). Finally, a
”counter”-chorale for the high register was created by combining those pitches not used
in the hexachords of the low chorale, in order to get a dodecaphonic balance. This was
done again with the help of the pmc-engine, in such a way that certain intervals and
chords between adjacent chords (tritones, minor seconds, but also major/minor triads)
were avoided and, again, the minimum of the frequency differences was as big as possible
(figure 19).


Figure 12. From the score of ”second horizon”: application of figures 10 and 11


Figure 13. From the score of ”second horizon” (continuation): application of figures 10 and 11


Figure 14. Chord being the first hexachord of the row

Figure 15. Beginning of search space of chords derived from the chord in figure 14

Figure 16. Bass melody (compare figure 4 after the first 11 notes)

Figure 17. Chord series on given bass (figure 16) using a family of chords (from figures 14 and
15)

7 Composing by interactive models


The following example (figures 20-22) starts quite similarly to the previous ones. A series
of chords is ”guided” by the movement obtained from a physical model of a decaying
pendulum, but a set of numerous strict rules ensures that the chords are connected by
a certain inner logic: parallel chords are prohibited, the same chord can only be used again
after at least three different ones, and the frequency difference between adjacent pitches has
to be greater than 40 Hz. Heuristic rules include the preference for contrary or oblique
motion over voices moving in the same direction, and again check the number
of common notes between adjacent chords.
Finally, another (strict) rule was added to prohibit repeated notes and repeated melodic


Figure 18. Rhythmical realisation of figure 17 in six voices

Figure 19. Dodecaphonic complement of figure 18

patterns or sequences in the highest voice. The ”appearance” of the latter rule is
characteristic of ”composing through interactive models”. In contrast to the other rules, the
necessity for this one was not foreseen or planned by the composer. But listening to the
results generated (”suggested”) by the pmc-engine it was obvious that such a rule was
essential. This shows the interactivity of the creative process: The composer formalizes
his ideas into rules and wishes. The machine generates one or more solutions. Being
confronted with these, the composer becomes aware of aesthetic problems and refines the
rule system. The musical result is optimised through repeating this loop of interaction
between man and machine.


Figure 20. Simulation of the movement of a decaying pendulum

Figure 21. Series of chords following the shape of figure 20 (beginning)

Figure 22. Chords of figure 21 with dodecaphonic complement in between

8 Computer aided orchestration


Figure 23 shows the result of an experimental system for computer aided orchestration. In
this example, all the notes played by the piano (explained in the previous section; compare
figure 22) were to be spread in a pointillistic manner among all the woodwind and brass
instruments (while the strings play a calmer layer of the four-voice chords without
the dodecaphonic complement). Here the search space for the pmc-engine consisted not
of pitches or rhythms but of symbols representing the possible instruments. The search
space was set up in such a way that for each note only those instruments were considered
whose range made it possible to realize the corresponding note. The range of each
instrument was defined in table 1.
The following simple rules controlled the process of orchestration:

a) Woodwind and brass instruments should not be used simultaneously in a chord.
This was achieved by defining allowed sets of instruments and prohibiting a mix of
elements from several sets (see table 2).


Figure 23. Computer Aided Orchestration: section from the score of ”second horizon”


(((d 5 n) (c 8 n)) picc)
(((c 4 n) (c 7 n)) fl1)
(((c 4 n) (c 7 n)) fl2)
(((b 3 n) (f 6 n)) ob1)
(((b 3 n) (f 6 n)) ob2)
(((e 3 n) (a 5 n)) englh)
(((d 3 n) (b 6 -)) clarB)
(((b 2 -) (c 6 n)) bclar1)
(((b 2 -) (c 6 n)) bclar2)
(((b 2 -) (e 5 -)) fag1)
(((b 2 -) (e 5 -)) fag2)
(((c 1 n) (g 4 n)) cfag)
(((b 2 -) (f 5 n)) hr1)
(((b 2 -) (f 5 n)) hr2)
(((h 1 n) (c 4 n)) hr3)
(((h 1 n) (f 4 n)) hr4)
(((a 3 n) (d 6 n)) tr1)
(((e 3 n) (a 5 n)) tr2)
(((e 3 n) (a 5 n)) flgh)
(((f 2 n) (c 5 n)) pos1)
(((e 2 n) (a 4 n)) pos2)
(((d 1 n) (a 4 n)) bpos)
(((a 1 n) (c 4 n)) tba)

Table 1. Ranges of instruments

(setf allowed-sets
  '((picc fl1 fl2 ob1 ob2 clarb englh bclar1 bclar2 fag1 fag2 cfag)
    (tr1 tr2 flgh hr1 hr2 hr3 hr4 pos1 pos2 bpos tba)))

Table 2. Sets of instruments for computer aided orchestration

b) Within such a set, the order of instruments has to correspond to the order of pitches.
This means, for example, that if flute, oboe and bassoon are playing a chord together,
the highest pitch has to be played by the flute, the middle one by the oboe and the
lowest by the bassoon (which is not so far from the rules of traditional orchestration).
In future this system could be expanded by additional, more sophisticated
rules, preferring favoured instrument combinations or making decisions on the basis
of information about the instruments' formants or registers.
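
Both rules are easy to sketch (plain Common Lisp; the ranges below are given directly in MIDI numbers rather than in the (pitch octave alteration) notation of table 1, and their values are only illustrative):

(defparameter *ranges*
  '((picc 74 108) (fl1 60 96) (ob1 58 89) (tr1 57 86)
    (hr1 34 77) (pos1 41 72) (fag1 34 75) (tba 33 60)))

(defun playable (pitch)
  "Search-space rule: all instruments whose range contains PITCH."
  (loop for (instr lo hi) in *ranges*
        when (<= lo pitch hi) collect instr))

(defun ordered-p (assignment order)
  "Rule b): ASSIGNMENT lists the instruments of one chord from the
highest pitch down; it must respect ORDER (instruments high to low)."
  (loop for (i1 i2) on assignment
        always (or (null i2)
                   (<= (position i1 order) (position i2 order)))))

;; (playable 70) => (FL1 OB1 TR1 HR1 POS1 FAG1)
;; (ordered-p '(fl1 ob1 fag1)
;;            '(picc fl1 ob1 tr1 hr1 pos1 fag1 tba)) => T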


9 Using constraint programming to improve musical material
While the previous examples showed complex interactions of rules for filtering solutions
from a big search space, it sometimes turned out to be more efficient to compose by
stepwise material optimisation. The following example was inspired by the famous ”endless
glissando” by Roger Shepard (an acoustical illusion created through overtones going
down in parallel octaves with smooth fading). By repeating the ”scrolling through the
dodecaphonic row” and transposing each of the resulting chords one semitone lower than
its predecessor, the strong coherence of the chords (originally sharing 4 of 5 pitches) allows
the listener to clearly perceive the continuous chromatic decline. It is obvious that this
process cannot be continued for long without reaching a register where the
pitches of the chords can no longer be perceived clearly. Therefore, all chords with
neighbouring pitches closer than 40 Hz were subjected to a recursive modification
process, iteratively transposing low pitches up an octave until the final result passed this
criterion (see figures 24-26).
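
The improvement step can be sketched as follows, reusing mc->hz and x->dx from the sketch in section 6: while two neighbouring pitches of a chord are closer than 40 Hz, the lower one of the offending pair is raised an octave (1200 midicents) and the chord is tested again.

(defun improve-chord (chord &optional (limit 40.0))
  (let* ((sorted (sort (copy-list chord) #'<))
         (freqs  (mapcar #'mc->hz sorted))
         (pos    (position-if (lambda (d) (< d limit)) (x->dx freqs))))
    (if pos
        (improve-chord (cons (+ 1200 (nth pos sorted))
                             (remove (nth pos sorted) sorted :count 1))
                       limit)
        sorted)))

;; C2 and C#2 are only ~4 Hz apart, so C2 is raised until all gaps pass:
;; (improve-chord '(3600 3700 4800)) => (3700 4800 6000)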

Figure 24. Series of chords obtained by scrolling through the dodecaphonic row and transposing
each resulting chord one semitone lower than its predecessor

Figure 25. Improvement of the previous example (figure 24)

Figure 26. Improvement of the previous example (figure 25)

10 ”Spectral prediction” technique


The last section of ”second horizon” develops - to the author's knowledge - a completely new
approach to linking harmony and melody, present and future, spectral and structural
thinking. The classical spectral school usually creates harmony by using the harmonic


series (or parts of it) of a present fundamental pitch (figure 27 shows a melody with
the harmonics 3, 4 and 5 added to each fundamental). Differently from, or additionally to,
this approach, particularly interesting results were obtained by adding multiples of the
coming, future fundamental to the current bass note (see figure 28). In this way the
actual chord already carries some ”hidden” information about the coming fundamental,
which can be ”felt” by the listener. The current and the following chord are thus
always syntactically linked, allowing the listener to develop intuitive expectations about
the future of the series of chords, and so making him perceive the music as logical.
In particular, the simultaneous use of classical spectral harmony and this ”spectral
prediction” technique (i.e. playing the pitches in figures 28 and 29 together) gave very
satisfying results.
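
A sketch of the idea (reusing mc->hz from the earlier sketch, plus its inverse): every chord consists of the current bass note plus low harmonics - frequency multiples - of the NEXT bass note. The harmonic numbers 3, 4 and 5 follow figure 27; everything else is illustrative.

(defun hz->mc (hz)
  (round (+ 6900 (* 1200 (log (/ hz 440.0) 2.0)))))

(defun predictive-chords (bass-line &optional (partials '(3 4 5)))
  "One chord per bass note: the bass itself plus harmonics of the
coming fundamental (the last bass note gets no chord here)."
  (loop for (bass next) on bass-line
        while next
        collect (cons bass
                      (mapcar (lambda (k) (hz->mc (* k (mc->hz next))))
                              partials))))

;; (predictive-chords '(4800 4600 4300))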

Figure 27. Melody with added harmonics 3, 4 and 5

Figure 28. Same melody as figure 27, but with ”spectral prediction” harmonies

Figure 29. Details of the ”spectral prediction” technique


11 Conclusions
With the exception of two sections, the beginning (m. 1-10) and a sensitive, calm moment
in the centre of the work (m. 231-273), almost the whole piece was ”generated” with the
help of the OpenMusic environment and exported directly into score editing software.
At no stage of the creative process were paper and pencil needed. Certainly the material
from OpenMusic was complemented by the composer through additional - supplementary
and contrasting - musical elements and voices. But it is most especially the interactive
use of the ”constraints engine” that shows that musical material produced by software
can go well beyond mechanical sounding results.
One of the difficulties which became evident during the work on this piece was the
need for a more structured representation of musical data. Handling the parameters
independently (like lists of midicent values for pitch, which ignore the existence
of rests; lists of fractions for durations, where rests are represented as negative numbers;
etc.) made it rather difficult for parameters to interact. In future, a rich, hierarchically
organized data structure for notes (a bundle of information containing many parameters
like pitch, duration, loudness, voice number, position in bar, expression marks, harmonic
context, etc.) should help to create even more ”intelligent” rules.

Bibliography
[1] Baboni Schilingi, J. et al.: PRISMA 01. Euresis Edizioni, Milano, 2003.

[2] Kretz, J.: Continuous Gestures of Structured Material. Published in Baboni Schilingi et al. (2003), p. 185ff.

[3] Sandred, Ö.: Searching for a Rhythmical Language. Published in Baboni Schilingi et al. (2003), p. 185ff.

Johannes Kretz
born 1968 in Vienna,
- studies of composition at the Vienna Music Academy with F. Burt and M. Jarrell,
studies of pedagogy and mathematics at the University of Vienna,
- 1992/93: studies at IRCAM, Paris; development of a software environment
(”KLANGPILOT”) for sound synthesis,
- 1994: co-founder of the NewTonEnsemble, specialised in electronic and instrumental
music,
- 1994-2003: teacher of computer music at the Inter-
national Bartok Seminar in Szombathely (Hungary),
- numerous lectures on computer music in Austria and
Germany, presentations at international conferences, various workshops at the Mu-
sic Academies in Stuttgart and Hamburg and at the National University for Arts in
Seoul (Korea),
- 1996-2001: teacher of music theory and composition at the Vienna Conservatory,
- since 1997: teacher of computer music at the Academy in Vienna, since 2001 also
music theory, since 2004 assistant in the composition class of M. Jarrell,
- since 2001: board member of several Austrian composers unions (ISCM, ÖKB and
ÖGZM),
- member of the international composers' group ”PRISMA” (centro tempo reale,
Florence/Italy) on computer music and musical research,
- numerous performances in Austria, Germany, France, Poland, Hungary, Argentina,
Turkey and Korea,
- numerous broadcasts in Austrian and German radio,
- commissions of works from Konzerthaus Wien, Klangforum Wien, Ensemble On
Line, Vienna Flautists, quartett22, Internationale Lemgoer Orgeltage, Haller Bach-
tage, Triton Trombone Quartett, Wiener Kammerchor,
- numerous grants,
- prize at the competition for orchestral composition ”Stiftung Christoph Delz” (CH,
2002), Körner prize Austria 2004, composition prize for an orchestra work from the
Austrian government 2004.

When the Computer Enables
Freedom from the Machine (On an
Outline of the Work
Hérédo-Ribotes)1
- Fabien Lévy -

Abstract. In some cases, when the musical process is sufficiently verbalised and
formalized, the computer allows the composer to concentrate on the music and to spend
less time on the calculation. I will present a very simple example of a technique used
in my orchestral work Hérédo-Ribotes for viola solo and fifty-one orchestra musicians,
and illustrate the aesthetic ideas underlying this piece. I will also show different cases in
which the computer was a good musical aid in creating this aesthetic meaning, and other
cases in which the work, on the contrary, was better done by hand.
***
As far as I’m concerned, it is extremely difficult for a composer to limit a musical
intention to formalised and verbalised procedures: aesthetic effect is more valuable than
technical skill. Apart from the fact that an intention cannot always be rendered explicit, a
technique’s effect is interesting only if one reaches beyond, so to speak, the kitchen utensils
in order to engage with the chef’s aesthetic, and even with his political obsessions. With
this in mind, I will present my personal use of the OpenMusic software, guided by two
considerations. The first is epistemological, and I will show that rather than restraining
the composer to making purely technical considerations, the use of computers in musical
composition enables him/her to free him-/herself from such limitations. The second
consideration is aesthetic, and I will attempt to describe succinctly the artistic reflections
that motivate my computer-based procedures presented here.

1 A generation of paradox
The composers who, like me, were born around 1968 are neither members of the gener-
ation who were in their twenties at the end of the Second World War (Boulez, Ligeti,
Stockhausen, Xenakis, et al) nor of the generation for whom the events of 1968 were
defining moments of their adolescence (Ferneyhough, Grisey, et al). This accounts for
one of the traits of my generation: that we are generally detached from the attitudes of
refusal of the past that marked the former generation or of systems that distinguished
the latter. Many of us, however, have inherited from our ”musical grandparents” the

1 Thanks to Christian Pinawin for his help with the translation into English.


desire to construct new grammars and share with our ”parents” the dream of developing
new concepts that go beyond the sign and beyond analytic thought2 . But in this so-
called ”postmodern” era, some of us are wary of notions of stylistic progress, preferring
instead notions of sincerity and originality, just as we are wary of the idea of universal
perception, favouring instead the idea of cultural listening conventions being interrogated
and deconstructed. Personally, I prefer paradox to forms and processes that are overly
demonstrative. In fact, on the level of technique alone, I was exposed to computers at
an early age3 . However, perhaps because I became familiar with computers at an early
age, I vigorously strive to demystify the contribution of technology to the creative act,
composing everything by hand, and mentally.

2 Transparametric techniques
One of the recurring questions in my work deals with the closure of analytic thinking and
perception4 . From 1996 onwards, I began to think hard about the notion of transpara-
metric musical inflexion – research into the smallest musical tensions, where the action
would be detected by the senses but not be intelligible or analysable, and in particu-
lar, not reducible to a transformation according to any of the traditional parameters
of Western music – duration, pitch and intensity. The challenge was to musically con-
struct moments of presque rien and je ne sais quoi, to use Jankelevich’s expressions. I
borrowed, for my instrumental music, an acoustic principle which was associated with
analogue samplers, bringing together in the same relation the frequency of transposition
and the pulsational speed of a sound. This technique was already used by Karlheinz
Stockhausen (Kontakte 1960; see also his article Wie die Zeit vergeht, 1957), but with a
different objective – so that the serial differentiation of pitches reflects that of rhythms.
In my work, this principle allows, on the contrary, the concealment of the parametric
origin of a phenomenon: an infinitesimal variation in pitch, for example the transposi-
tion of a quarter-tone of a musical element, would be systematically accompanied by a
change of the same order to the other parameters, particularly to the speed of rhythmic
repetition of this element (figure 1). The accumulation of these tiny inflexions applied to
all parameters in similar ratios would create a tension without one being able to perceive
its objective cause.
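The proportional relation can be sketched in Lisp as follows (an illustration only, not the actual OpenMusic patch shown later in figure 2; the function names are ad hoc):

;; A transposition given in midicents (100 midicents = one semitone,
;; 50 = a quarter-tone) yields a frequency ratio; the element's
;; repetition speed is multiplied by that same ratio, so that pitch
;; and pulsation move together and the parametric origin of the
;; inflexion is concealed.
(defun transposition-ratio (midicents)
  (expt 2.0 (/ midicents 1200.0)))

(defun inflected-pulse (pulse-rate midicents)
  ;; pulse-rate in attacks per second
  (* pulse-rate (transposition-ratio midicents)))

;; A quarter-tone upwards speeds the pulsation by about 2.9 percent:
;; (inflected-pulse 8.0 50) => 8.234...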
It should be made clear that in order to develop this technique5 and to assess its

2 Analytic thought is characterised in Western music in particular by a separation of musical parameters into rhythm, pitch, and duration; by a reduction of complex, continuous phenomena to finite, discontinuous alphabets; by functional thought; and consequently by a combinatorial treatment of the sign.
3 When I was eleven, I was playing on an elementary Sinclair ZX80 computer; a few years later I performed my first experiments in computer music with the Yamaha CX5M musical computer and in programming (BASIC, and then PASCAL). I pursued advanced coursework in the sciences at the same time as my studies in music.
4 Perception and writing are closely linked, and to work on the closure of one has implications for the

deconstruction of the other. We think according to how we perceive and represent the world; likewise, we
perceive according to how we think, for example by focusing in the West on the fundamental – or tone –
of a complex sound, by separating the parameters of listening, by instinctively hearing tonal functions,
etc.
5 The technique itself remains somewhat naive; but seen from a strictly algorithmic point of view, so

too do the techniques of derivation of dodecaphonic series, the production of spectra or the canon.


Figure 1. Technique of transparametric musical inflexion: extract from Les deux ampoules d’un
sablier peu à peu se comprennent (1996), for amplified solo harp (Billaudot publisher)

aesthetic consequences, it was initially necessary to work on it manually: the painstaking
slowness of transcribing first onto graph paper and then onto music manuscript paper
helped me determine what the technique put at stake, as well as its musical consequences.
It was only when the technique was properly verbalised and formalised, when its aesthetic
stakes were defined, and when my research began to shift focus to other techniques –
making it necessary that no more time be spent on calculations of transparametric
pulsations – that I decided to program the technique in OpenMusic.

Figure 2. Formalisation of the technique of transparametric musical inflexion: patch programmed in the OpenMusic software (left) and example of a result (right, sketch from Hérédo-Ribotes, for solo viola and 51 orchestral musicians, mes. 210)

3 Passage to the machine


The formalization of such a technique on OpenMusic raises numerous interesting issues.
In the pieces that followed Les deux ampoules d’un sablier peu à peu se comprennent,
I focused more upon the ”styling” of the cloud of points obtained by the technique of
transparametric inflexions. The task was to give form and colour to each of the tiles
of the mosaic, so that one’s perception would be lost between totality and detail. This
reflection on confusion is a prior step in the deconstruction of analytic thought in music.
So I needed a simple, effective tool that allowed me to create the cloud of points and
their different inflexions and to hear them immediately, like a composer searching for and
selecting chords at the piano. OpenMusic, with its musical interface, its capacity to hear


immediately in quarter-tones, and its double representation of music – metric (in the form
of a voice object) and proportional (in the form of a chord-seq object) – was the ideal tool.
The other noteworthy property of OpenMusic is its structure in patches of patches, in
abstraction of abstractions, very close to a composer’s thought modes: once a technique is
established, the composer reuses it often in slightly modified configurations or integrates
these configurations into more extensive techniques. The capacity to transform a patch
into a functional abstraction, that is, to transform certain data into variables in Open-
Music, and the possibility to insert a patch into another encourages this compositional
practice.

Figure 3. Combination and abstraction of patches in OpenMusic: sketches for Hérédo-Ribotes, for solo viola and 51 orchestra musicians (mes. 169 à 219)

4 On the utility of manual labour


Once the cloud of points has been calculated by the computer, one must evidently render
it musical – in particular, one must style it, orchestrate it, give it rhythm, and – should
the occasion arise – make local modifications. At this stage, Kant, an OpenMusic library
that permits the quantization of a chord sequence (such as the one illustrated
in figure 2) into rhythmic notation with traditional meter, would have proved effective.
However, I prefer to perform this transcription/quantization by hand, despite the slow-
ness of the work, printing the chord sequence onto graph paper and then copying it again
slowly. In fact, it is this manual operation that permits me to understand, to control – in


short, to listen to this primary material with my inner ear, in order to render it musical
subsequently.
This technical presentation of a use of OpenMusic will appear naive to some. The
applications of this software in my personal practice can be more complex (calculation by
OpenMusic of ”first spectra”, calculation of ”iterated harmonic transpositions”,
representation by ”Tanner vectors”, the Pareto software)6. The issue here, however, is to
outline the stages that lead from the elaboration of a technique arising from aesthetic
preoccupations up to the point of its formalisation and ”normal” – almost normative –
application. An additional concern is to show that in a simple case, even when the
formalisation of the patch is easy and makes the work less arduous, it is more important
to make use of the slower pace of pen and ink, and of manual calculation.
OpenMusic is an extraordinary tool for the symbolic calculation of the musical, thanks
to its interface as well as to the unmatched freedom it affords the composer from certain
constraints imposed by calculation, allowing him to concentrate better on musical quality.
By way of conclusion, however, let it be clear that OpenMusic remains a tool for the
calculation of classical musical categories (pitch, duration, dynamics, MIDI parameters)
that arise from the categories of Western musical writing established by Boethius, Hucbald
de Saint-Amand, Guido d'Arezzo and Franco of Cologne between the sixth and thirteenth
centuries. With the appearance of recording technologies at the beginning of the twentieth
century, and their deployment as archi-writing, to use Derrida's expression, from the
1950s on by composers of musique concrète and of electro-acoustic music, complex sound
has become an essential category of recent music. Today, digital means of ever-increasing
capability open the way to a symbolic approach to thinking about complex sounds, which
still lack means of clear and ergonomic semiotic representation for composition (Lévy,
2002). One can hope, nevertheless, that in the near future a tool for ”the composition of
complex sound” will appear, bringing together the possibilities of sound-transformation
software such as Protools, Max, or Logic Audio and the high-level symbolic calculation
capacities of software such as OpenMusic.

6 Pareto [Patch d'Analyse et de Resynthèse des Echelles dans les musiques de Tradition Orale] (Patch for the Analysis and Resynthesis of Scales in Oral-Tradition Music) is software programmed in OpenMusic that aids in the determination of pitch scales in the musical repertoires of oral traditions. Once the chart of virtual fundamentals of a sound file has been extracted by software such as Diphone, AudioSculpt, etc. and imported into OpenMusic in the form of a text file, Pareto evaluates the average scale of the motif with statistical tools of averaging and time smoothing. Then, in order to confirm that the average scale thus determined makes sense to the musicians, it is possible in Pareto to transform the initial sound file by microtransposition via SVP according to precise hypotheses about the scales (hypotheses of different scales, a hypothesis designed to be confirmed by auditory consent, ”placebo” scales, etc.), in order to offer it to the musicians again. Pareto was used for the first time in July 2000 in Cameroon, in the course of an ethnomusicological mission with the Bedzan pygmies. More information on http://membres.lycos.fr/fabienlevy/Pareto.html

Figure 4. Hérédo-Ribotes, for solo viola and 51 orchestra musicians: realisation of the cloud of pulsated points (mes. 209 à 213, Billaudot publisher). [Full orchestral score page; its performance note reads: each note, carried by its own morphology ([dry B♭], [F quarter-flat, three repeated notes], [D quarter-flat, flatterzunge], [G quarter-sharp, reversed sound], etc.), is to be heard as one of the independent, homogeneous voices of a counterpoint of sonorities.]
Bibliography
[1] Lévy, Fabien: L'écriture musicale à l'ère du numérique. Culture & recherche n° 91-92,
Musique et son : les enjeux de l'ère numérique, Ministère de la Culture, mission
de la recherche, July 2002.
[2] Lévy, Fabien: Complexité grammatologique et complexité aperceptive en musique.
Étude esthétique et scientifique du décalage entre la pensée de l'écriture et la perception
cognitive des processus musicaux sous l'angle des théories de l'information et
de la complexité. PhD in music and musicology of the twentieth century, École des Hautes
Études en Sciences Sociales, Paris, February 2004.
[3] Stockhausen, Karlheinz: Wie die Zeit vergeht. In die Reihe, vol. III, Herbert Eimert
(ed.), Vienna, 1957.

Fabien Lévy
Fabien Lévy was born in December 1968 in Paris (France). He studied composition
with Gérard Grisey at the Conservatoire National Supérieur de Musique in Paris.
Parallel to his compositional activities, he was awarded a Ph.D. for his work on
the complexity of musical processes at EHESS and CNRS, and has written numerous
theoretical articles. He was resident in Berlin in 2001 for the DAAD Berliner
Künstlerprogramm, and in 2002 at the Villa Medici / French Academy in Rome. His
orchestral work Hérédo-Ribotes was nominated at the International Rostrum of
Composers (2002), and in 2004 he won the Ernst von Siemens Foundation Förderpreis
for Music. His works for soloists, chamber music, ensemble, orchestra (used as a
large ensemble of soloists) or computer have been performed across Europe and the
United States. His instrumental works are published by Billaudot. He currently
lives in Berlin (Germany), where he teaches orchestration to composition students
at the Hanns Eisler Hochschule für Musik.
The Pareto software can be downloaded at:
http://membres.lycos.fr/fabienlevy/Pareto.html

Some Applications of OpenMusic in
Connection with the Program
Modalys
- Paola Livorsi -

Abstract. The basic idea behind Os for male voice and electronics (realized at Ircam
in 2001-2002) is the production of resonances: partly produced by vocal soundfiles exciting
Modalys objects (e.g. simulations or basic models of percussion instruments or their parts),
partly by other soundfiles. The vocal soundfiles were recorded with Nicholas Isherwood
and Armelle Orieux. They comprise both sung examples and readings of excerpts from
Saint-John Perse’s poem Vents1 , on which the piece is based.
***

1 Introduction
OpenMusic2 has been important throughout the whole work, for preparing part of the
material, taken from vocal soundfiles (from Ohnfad for female voice and electronics3 ),
and analysed with AudioSculpt; for collating and examining data, and for calculating
interpolations between harmonics. Its usefulness lay mostly in amplifying and refining
the possibilities of the processing afforded by Modalys 1.8.1.
The piece makes use of both realtime interaction and prepared soundfiles, in both
cases triggered by the performer, via a Midi device.
After to a short introduction, the piece is made of various stages of voice-Modalys
object interactive events, from simple ones (played by Max/MSP, through the object
resonators∼) to more complex, synthezised sounds. In this case the OpenMusic library
OM Modalys 1.0 plays a central role4 . At the same time, the material used to build the
objects evolve from dry (e.g. cork) to resonant ones (various kinds of metal, such as iron,
bronze and silver); the object types change accordingly, starting from simple strings and
moving up to rectangular and circular plates.
The two types are then blended into hybrids, morphing into each other, or combining
their characteristics to form a ”super” dual-material object (e.g. a circular plate with a
spectrum with the combined characteristics of more than one material). In this case the
two objects used to build the hybrid needed to be of the same type (e.g. two strings or
plates, and so on).

1 Perse,S.-J. (1960), Vents, Gallimard, Paris. Extracts: II, 3, pp.15-17; I, 6, p.22.


2 The version referred to here is OM 4.0.
3 Realized in 2000 at CCMIX, Centre de Création Musicale Iannis Xenakis, Paris.
4 I gratefully acknowledge the work of Mauro Lanza, who created this feature and familiarized me

with it.


Another progression is carried out in the concert diffusion: seven speakers are used
(three pairs around the room plus one on the ceiling); the sound moves from a front
stereo pair (for the voice onstage) through various configurations of the seven channels.
This especially concerns the realtime part, where the gestures are more obvious.

2 Real-time resonances
A Modalys object can be used in realtime interactions if reduced to a single point, that
is, simplified so as to be one-dimensional; this makes it possible to preserve the physical
characteristics of the material while reducing the amount of data needed. The data is
collected in a Max coll, and eventually played by resonators∼.
For example, in the first part of Os, at event no. 5, an iron clamped circular plate
is struck by the onstage voice. The Modalys script defines the material and the pitch,
and then saves the object as a single point. The next step is to extract the coll, with
the OM Modalys 1.0 library function for-msp-resonators (see figure 1), which makes it
possible to choose the resonance source in the object: an object containing 200 modes,
for example, requires a value around 100 (i.e. a centre value) in order to generate a
reasonably ”harmonic” sound. As always in such cases, only experimentation and listening
will yield the right settings; in the present case, a value of 45 gave an interesting result.

Figure 1. Passage from the single-point object to the coll file for Max/MSP with the library
OM Modalys 1.0

The file thus obtained has three orders of values (frequencies, bandwidths and
amplitudes) – a mass of data that can easily be filtered and converted into the right
format using OpenMusic, ready to be injected into a coll.
Here is an example from a Modalys script (event no.5) obtained with the save-object
function:


@ geometry info: clamped circ plate r points = 10 phi points = 20 @


Modal data for Modalys object:
number of modes: 200
number of points: 200
freq0 Hz: 49.00000000 abs0 1/s: 1.20216090 0.14514700
freq1 Hz: 101.97498122 abs1 1/s: 1.20935901 0.13701389
freq2 Hz: 101.97498122 abs2 1/s: 1.20935901 0.12409865
freq3 Hz: 167.28689844 abs3 1/s: 1.22518642 0.10731958
freq4 Hz: 167.28689844 abs4 1/s: 1.22518642 0.08788252
freq5 Hz: 190.76130153 abs5 1/s: 1.23275089 0.06720917
freq6 Hz: 244.76442865 abs6 1/s: 1.25391866 0.04685580
freq7 Hz: 244.76442865 abs7 1/s: 1.25391866 0.02842945
freq8 Hz: 291.76326374 abs8 1/s: 1.27661322 0.01350962
freq9 Hz: 291.76326374 abs9 1/s: 1.27661322 0.00358314

Here is an example of the same data, converted into coll format via the Modalys library:

1, 49.0 0.06720917 1.2021609;


2, 101.97498122 -0.01291711 1.20935901;
3, 101.97498122 0.12874027 1.20935901;
4, 167.28689844 -0.13967415 1.22518642;
5, 167.28689844 -0.01401416 1.22518642;
6, 190.76130153 -0.07748765 1.23275089;
7, 244.76442865 0.01363864 1.25391866;
8, 244.76442865 -0.13593152 1.25391866;
9, 291.76326374 0.00513917 1.27661322;
10, 291.76326374 -0.05122025 1.27661322;

Figure 2. Filtering data with OpenMusic


This text can be further filtered by OpenMusic to reduce the number of decimals below
six, allowing Max/MSP to process the data flow properly (see figure 2).
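The conversion amounts to simple list processing; it can be sketched in Lisp as follows (an illustration only, not the actual patch; the function names are ad hoc, and each mode is assumed to be a list of three values):

; Round every value of each mode to a fixed number of decimals and
; print one coll-style line per mode: "index, v1 v2 v3;".
(defun round-to (x decimals)
  (let ((scale (expt 10 decimals)))
    (float (/ (round (* x scale)) scale))))

(defun modes->coll (modes &optional (decimals 5))
  (loop for (freq amp bw) in modes
        for i from 1
        do (format t "~d, ~a ~a ~a;~%"
                   i
                   (round-to freq decimals)
                   (round-to amp decimals)
                   (round-to bw decimals))))

; (modes->coll '((49.0 0.06720917 1.2021609)
;                (101.97498122 -0.01291711 1.20935901)))
; prints:
; 1, 49.0 0.06721 1.20216;
; 2, 101.97498 -0.01292 1.20936;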
Event no. 7 (figure 3) makes use of string resonances in realtime, as does no. 1 (cork
string)5, this time combining iron, silver and bronze materials in a richer multi-channel
configuration.

Figure 3. Os for male voice and electronics (2001-02), real-time events no.7-11

The two most significant realtime passages are events nos. 7-11 (figure 3) and 16-21. In
both cases the text refers to ”forces” (e.g. the wind, but also more abstract energies),
to ”flourishing” (prospérer) and to ”spreading” (propager), culminating at event no. 23
(toute colère de la pierre et toute querelle de la flamme); in this section realtime
resonances are combined with more complex soundfiles, which eventually dominate the
other elements.
The resonances are still rather light and simple in event no. 7 and following, but they
become richer and richer until no. 21, where a dual-material object is used in realtime.
In this case, after saving the two objects (as seen above) they can be imported into
an OpenMusic patch, via the Modalys library, and their characteristics can be combined;
the resulting object is richer and denser, in this case combining frequencies from a bronze
clamped-circular-plate with bandwidths and amplitudes from an aluminium one.
The last section of the piece uses realtime objects as well, this time rectangular and
circular plates made out of skin, making deeper, wider resonances possible, in addition

5 Numbers in rhombus refer to real time events, in circle to reverbs, in square to soundfiles. The score

is available on request at the Finnish Music Information Center (www.fimic.fi).


to a new, warmer timbre. The text refers to ”scattering” (disperser ) and ”lacerating”
(lacérer ): this extract from Vents is essentially about opposing forces leading to final
destruction; but it also tells of a release of energies, not necessarily negative ones, capable
of bringing about the birth of something new, by preparing a new space.
In order to obtain good results this technique requires a fair amount of processor
power; the maximum number of modes that balances sound quality against processor
cycles is around 300 – less if a large number of colls are used in rapid succession.

3 Working soundfiles with the Modalys library


The Modalys library now has a new version6, making it possible to connect the
two programs efficiently, widening and refining Modalys' possibilities while at the same
time making it easier to use. Combining vocal soundfiles and objects is simple, the
objects being the same as those used in realtime; later, a second, female voice is used
in conjunction with the male one – for example, two soundfiles may be used simultaneously
to excite an object (event no. 15, figure 4).

Figure 4. Events 15-18

Again, the simplest interaction consists of exciting an object by voice (in most cases
a mono or stereo soundfile): but in this case Modalys and OM give greater control over,

6 The new library version makes this procedure much simpler


as well as differentiation of, the way the sound evolves. Typically a richer sound is
obtained by combining two instrumental types, using a hybrid (not yet possible in realtime).
OM enables the user to control parameters more efficiently than by hand in Scheme7:
e.g. the access envelopes, which in Modalys govern the point(s) of contact between player
and instrument (simulation of a plectrum, etc.), and the microphone position point(s).
Other parameters that may be controlled by OM are the synthesis time (e.g. duration of
resonance), the name and length of the soundfile, and so on. Another important advantage
of controlling accesses with OM is the possibility of keeping them moving throughout
the sound, for example with a random envelope, which gives a richer result.
These examples generally use mono soundfiles, but stereo or even multichannel
soundfiles can be exploited by increasing the number of accesses and other parameters.
First the Modalys file must be prepared: the object is created, then the parameters
intended for processing by OM are exported with the function export 2OM; this makes
it possible for OM to load the file (through the Modalys Library function get-mos-
parameters). In this way a new class is created, which can be dragged from the folder
User (in Packages) and displayed in the OM patch with all its characteristics. It can be
sent via inputs and outputs. This is the basic preparation required for any Modalys file
data that is to be processed in OM.
Here is the parameter extraction from the preliminary Modalys file, using the
export2OM function:

(export2OM
 '((access-env env 2 "access-env")
   (listen-env0 env 2 "listen-env0")
   (listen-env1 env 2 "listen-env1")
   (sfname string nil "name of sf")
   (sflength number nil "length sf")
   (synth-time number nil "time of resonance")))

Extract from the final Modalys file, with a part of the access random envelope:
(define access-env
  (make-controller 'envelope 2
    (list
      (list 0 0.2896651022797322 0.0908634551271315)
      (list 0.2915204383796462 0.31293824977942974 0.09440941172674323)
      (list 0.3148427421782962 0.3434790739590433 0.14632212124212246)
      (list 0.6271550746711997 0.3287648616816715 0.197080263919739)
      (list 0.7129107505286313 0.31194877272057675 0.19123511318479822)
      (list 1.1984812810467207 0.21228409317901742 0.16405085594348906)
      (list 1.432428384601005 0.28785242137253353 0.13269917529711484)
      (list 1.6964336544301988 0.25413045827291614 0.13143346991789379)
      (list 2.0600987094661196 0.32732066143890587 0.14597496441879437)
      ; ... remaining breakpoints elided ...
      )))
In this example both the play function and the object's access are controlled by a
random envelope created by the OM function brownian1 (a simulation of Brownian
motion, from the OMAlea library; see figure 6). The random envelope boundaries are
7 The programming language used in Modalys.


written so as to approach the values 0 and 1 without actually reaching them (which in this case
Modalys would refuse). The library function synthesize-for-modalys makes it possible
to control some interesting parameters, such as sample rate and number of channels. At
the end of the process we have a new Modalys file (located in the modfolder, inside the
main OM folder where the application is), containing both the parameters of the previous
file and those controlled by OM, which can now synthesize the sound (figure 5).

Figure 5. Synthesis of a sound with accesses moving in time

Figure 6. Simulation of the brownian motion (sub-patch of figure 5)
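The principle of the bounded random envelope can be sketched in Lisp as follows (an illustration of the idea only; the function actually used is brownian1 from OMAlea, whose implementation may differ):

; A Brownian-style random walk returning an envelope as a list of
; (time value) breakpoints, clipped so as to stay strictly inside
; the interval (eps, 1 - eps): the boundary values 0 and 1, which
; Modalys would refuse, are never reached.
(defun brownian-envelope (n-points total-time &key (step 0.1) (eps 0.01))
  (let ((value 0.5)
        (dt (/ total-time (1- n-points))))
    (loop for i below n-points
          collect (list (* i dt) value)
          do (setf value
                   (max eps
                        (min (- 1.0 eps)
                             (+ value
                                (- (random (* 2.0 step)) step))))))))

; e.g. (brownian-envelope 9 2.0) returns nine breakpoints over two seconds.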


The same procedure has been applied to obtain soundfiles where two voices (e.g. two
mono soundfiles) strike the same object (in this case a bronze clamped circular plate):
this time the patch structure is doubled, each mono soundfile referring to a dimension of
the object (see figure 7).

Figure 7. Two sound files hit the same object, with moving accesses

Example from the initial Modalys file, with the two sound controllers:

(define sf1
(make-controller ’sound-file 1 0 (const 44100) sfname0 0 sflength0 30))

(make-connection ’force ccpl-bronzo-hit1 sf1)

(define sf2
(make-controller ’sound-file 1 0 (const 44100) sfname1 0 sflength1 30))

(make-connection ’force ccpl-bronzo-hit1 sf2)

(define ccpl-bronzo-out (make-access ccpl-bronzo listen-env ’normal))

The same brownian function has been used to create an envelope for the hybridization
of an object, in this example morphing from an aluminium clamped circular plate into
a brass string: the object is once more struck by the voice (this time the male one), the
technical procedure being far less complex, but still musically effective (see figure 8).


Figure 8. Synthesis of a sound with a hybridization envelope created by OM

A yet richer way of working with soundfiles and hybrids is to focus on material
that does not change over time but has a richer timbre: OM makes it possible to
superimpose the characteristics of two objects of the same kind made out of different
materials, and to obtain a third object, which can be excited by soundfiles, as seen above.
Here it is of interest to filter the object, enhancing the harmonics, which are an important
element in the work.
As can be seen in the example below, once the objects have been saved in Modalys,
OM can import and display them as classes: the next step is to connect the two elements,
using the frequencies of the former with the amplitudes and bandwidths of the latter;
exploiting two objects that are of the same type but made of different materials generates
a stable hybrid, in which the absence of temporal evolution is compensated by richer
harmonics and a stronger timbral presence. This can be increased and controlled by adding
a filter, i.e. a text file with certain frequencies injected into the object (see figures 9 and 10).
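In spirit, the combination can be sketched in Lisp as follows (an illustration only, not the library's hybrid mechanism; both objects must be of the same type and thus have the same number of modes):

; Build a hybrid mode list by taking the frequencies of one object
; and the amplitudes and bandwidths of another; each mode is assumed
; to be a list (frequency amplitude bandwidth).
(defun hybrid-modes (freq-source amp-bw-source)
  (mapcar (lambda (m1 m2)
            (list (first m1) (second m2) (third m2)))
          freq-source amp-bw-source))

; (hybrid-modes '((49.0 0.067 1.202) (101.97 -0.013 1.209))
;               '((60.0 0.100 0.900) (120.0 0.050 0.950)))
; => ((49.0 0.1 0.9) (101.97 0.05 0.95))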
It is interesting that in this manner rather heavy objects (up to 300-400 modes) can
be created and processed, even if with some reservations: importing them may be slow,
and it is better not to save the objects' icons in the OM patch, so as to avoid overloading
the programme.
The same procedure can be used in combination with a soundfile and access struc-
tures as seen above: the number of accesses and of channels may be increased (in the first
example up to 4 accesses and 2 channels, in the second up to 8 accesses and 4 channels,
see figure 11). In the following examples other soundfiles have been employed, respec-
tively a percussion sample (bâton de pluie) and a concrete sound (the crackling of fire);


Figure 9. Creation of an object made of two different materials

Figure 10. Sub-patch of figure 9


both choices have to do with themes and materials in the poem: fire and wood appear
frequently, together with iron, and other natural elements. Perse’s world bristles with
materials, sounds and odours, as well as images: for me this was a powerful incentive to
realize the piece with Modalys.

Figure 11. Object made of two different materials, with moving accesses

4 Vocal techniques and object interaction


So far we have seen a few examples of interaction between soundfiles and instrumental
models, the vocal soundfiles mostly consisting of spoken or sung material. These can be
further divided into spoken or whispered voices, isolated phonemes and sung examples,
requiring different techniques. An important reference was Nicholas Isherwood’s research
on vibrato techniques8 , such as throat tremolo, which I used in some parts of the piece
(regarding the electronic part, please see above, event 23, figure 12).

8 Isherwood, N. (1994) Le Vibrato dans le chant contemporain. Mémoire de DEA Musique et musi-

cologie du XXème siècle, Paris (with an essay by LARGE, J., An Air Flow Study of Vocal Vibrato).


Figure 12. Event no.23, vibrato techniques, random wide vibrato

Not much needs to be said about the first category, except that spoken (or whispered)
soundfiles are easy to use and are generally effective, sometimes dissolving into the texture
of the object’s resonances, sometimes remaining distinct (obviously this depends on their
respective frequencies).
An interesting case is that of phonemes: for this project I recorded plosive consonants
(such as ‘d’, ‘p’ and ‘k’, which play an important role in Perse’s verse), in order to generate
maximum resonance. The material was then processed by Modalys, using a wrought iron
rectangular plate hit by the phoneme ‘k’: the result is an interesting mixture of percussive
but nevertheless vocal sound, subsequently used in vertical combination, forming a kind
of chord (see event 22).
Phonemes also play an important role in the realtime part: for example in the first
section of the piece, where ‘s’ and ‘d’ (from os and discorde) go directly to strike the
virtual objects (a cork string in event 1 and an iron clamped circular plate in event 5).
The quasi glissando technique (written in microtonal pitches in the score) is used a
lot in the piece, to create a vocal style close to spoken language: this was also the point
of departure from which the duration values were derived, on the basis of the number
of syllables in Perse’s poem. The numerical structures were then used throughout the
work. This technique was applied more directly to resonance in the final section, in
combination with the skin instruments: here too the amount of energy, and the subtle
frequency variations generate a rich response in the realtime objects, in this case circular
and rectangular membranes.
Vibrato has an important place both in the first and in the last part of Os, first
appearing in the solo (see figure 13). As a vibrato that varies in amplitude and width, it
becomes important in event 23, the last soundfile in the piece before the final, realtime
part.


Figure 13. Vibrato techniques, throat tremolo

In the first section of Os two types of vibrato are used: the tremolo, made up of a reg-
ular swaying movement, almost one tone wide (but still perceived as a single pitch)9 ; and
the throat tremolo (in French ’balancement’), where the swaying movement, generated
by the glottis, is irregular and is accompanied by strong intensity variations10 .
As seen above, in event 23 male and female voices are used in combination, where the
text reads elles épousaient toute colère de la pierre et toute querelle de la flamme11 (they
- e.g. the wind forces - married the anger of stone and the quarrels of fire): with the
image of fire, the voice onstage executes a wide throat vibrato which is a continuation of
the vocal style of the recorded voices. This vocal technique generates a great amount of
fluctuating energy, provoking colourful variations in the resulting soundfiles.

5 Conclusion
As mentioned earlier, the present article is only a rapid survey of what can be achieved by
connecting OM to Modalys. Os has been an important experience for me, one that I wish
to develop further. Os is intended to be part of a larger cycle of vocal works on Perse's
poem Vents, called Spazi (for vocal ensemble of five voices and electronics, 2000-200512).
Spazi is nevertheless a work in progress, in which I would like to explore the many possibilities of
interaction between human voice and machine.
Program interaction seems to me especially worth developing, providing as it does
subtler ways to widen musical expressiveness.

9 Ibid.,
pp.43-45.
10 Ibid.,pp. 51-53.
11 Perse, S.-J. (1952) Winds, transl. by Hugh Chrisholm, Pantheon Book, The Bollingen Series, New

York.
12 Premiered in Helsinki on June 3rd 2005, Ring Ensemble, Prisma expert meeting 2005.

Paola Livorsi
Paola Livorsi, who is a free-lance com-
poser, was born in Italy in 1967, and
has lived in Helsinki since 2001. She
studied composition in Turin, Lyon,
Paris (at Ccmix in 1999 and Ircam in
2000) as well as Helsinki. She gradu-
ated in music history in Turin under
Giorgio Pestelli in 1994.
Livorsi’s works have been performed
at several festivals, in Turin (Settem-
bre Musica 1998), Paris (Agora 2002),
Rome (ControCanto 2002), Helsinki (Musica Nova 2003), Saarbrücken (Musik dem
21. Jahrhundert 2003, Klangforum Wien) and Takefu (Japan, Takefu International
Music Festival 2004, Arditti Quartet). Livorsi has received commissions and grants
from Italian and Finnish institutions. Since 1997 Paola Livorsi has been foreign
correspondent for the review Il Giornale della Musica.

Fractals and Writing,
Six Fractal Contemplations
- Mikhail Malt -

1 Introduction
This text is intended to show how the notion of fractals may be applied to construct a
”self-similar” form that ”drives” the general evolution of the composition process of a set
of short solo pieces for instrument and electronics.
In an initial phase, this ”guide-line” will act as a ”conductor” for the instrumental
sequence, and will also have an important harmonic function by accentuating the ”poles”
toward which the writing will gravitate. In a second phase, the instrumental sequence will
undergo contrapuntal inter-play with the ”guide-line”. As a result, the fractal form will
emerge in the synthesis part of the electronics as well.
The basic tool for constructing fractals will be the Iterated Function System (IFS)
found in OpenMusic’s Om-Chaos 1 library.

2 Context
This strategy was used to create a group of six short pieces for solo instrument and CD,
the Six Fractal Contemplations, commissioned by the Bagnolet Conservatory for teaching
purposes. These pieces can be played either separately or as a suite.

3 ”Off-Time” formalization
3.1 The basic form
Since the constraint was that the six pieces could be played as a suite, it seemed to
me important that formal conditions be imposed on the sequencing. As a first step, I
established the order in which the pieces were to be played. Guided by the register I
was going to use for each instrument and by the timbre relations between the instruments,
I chose the following order: trumpet, flute, alto, voice, guitar and clarinet. This
order made possible an evolution of registers (figure 1).
The representation of the sequencing of registers as the evolution of two curves offered
another way of considering this evolution (figure 2) and so a third representation emerged:
the evolution of the ”centers” of these registers (figure 3). The latter representation is
used as a basic form to create the ”conductor”.

1 OM-Chaos, Mikhail Malt, Ircam, Paris, 1998.


Figure 1. Registers

Figure 2. Registers as curves

Figure 3. The evolution of the registers ”centers” as a curve

However, the curve in figure 3 suffered, in my opinion, from a lack of ”satisfactory”
dynamics, whether for each isolated piece or for the group as a whole. It should be noted
that this curve represented for me not only the writing of the notes in space, but also a
kind of evolution of the density of events and of the ”drama” of the sequence. I therefore
reversed the curve (figure 4), so that it reaches a climax towards the end of the sequence.
Figures 5 and 6 show the patches used to ”reverse” the curve. It should be noted
that in the OM environment the curves are represented in BPF (Break-Point Function)
objects, an editor in which data can be represented as piecewise-linear functions. We
notice in the patch of figure 5 that the registers are first given as a list of associations:

((tr (57 81)) (fl (68 91)) (alt (48 84)) (vx (57 76)) (guit (40 76)) (clr (52 88)))

in which every sub-list contains two elements: a symbol identifying the instrument, and
a sub-list with the boundaries of its register in MIDI pitch.
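The two operations can be sketched in Lisp as follows (an illustration of the method only; the patch itself works on BPF objects, and the exact inversion used may differ):

; Compute the center of each register, then "reverse" the curve by
; reflecting every value around the midpoint of its own range.
(defparameter *registers*
  '((tr (57 81)) (fl (68 91)) (alt (48 84))
    (vx (57 76)) (guit (40 76)) (clr (52 88))))

(defun register-centers (registers)
  (loop for (nil (low high)) in registers
        collect (/ (+ low high) 2.0)))

(defun reverse-curve (values)
  (let ((lo (reduce #'min values))
        (hi (reduce #'max values)))
    (mapcar (lambda (v) (+ lo (- hi v))) values)))

; (register-centers *registers*)  => (69.0 79.5 66.0 66.5 58.0 70.0)
; (reverse-curve (register-centers *registers*))
;                                 => (68.5 58.0 71.5 71.0 79.5 67.5)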


Figure 4. The reverse curve of the registers ”centers”

Figure 5. Patch reverting the BPF

3.2 Proliferation and self-similarity


The form in figure 4 should govern both the general evolution of the sequence and the
evolution within each piece. This is where the idea of self-similarity comes in. Here, the
evolution of each piece should be identical (”congruent” might be a better word) to the
evolution of the entire six-piece sequence. For this purpose I used fractal ”proliferation”
with IFS2 (Iterated Function System) modules, in order to create and manipulate recursive linear
2 The OM-CHAOS library possesses the following tools to control IFS: ifs-lib, IFSx, make-w and app-W-trans.


Figure 6. Abstraction of the patch for reversing the BPF

systems. This kind of system makes it possible to construct fractal objects, and is a way
of generalizing linear transformations in the plane. Indeed, one of the most interesting
modules for composers is make-w. Any musical process that can be formalized by a linear
transformation, such as temporal and/or frequential dilation and compression, can be
calculated using this module. make-w makes it possible to apply the following linear
transformation w(x, y) to any object in the plane (figure 7).

Figure 7.

The expression of w(x, y) [GOGINS 1991] is given by the following relation:

$$w(x, y) = (ax + by + e,\ cx + dy + f)$$

or, in matrix form,

$$w\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} e \\ f \end{pmatrix} = A\begin{pmatrix} x \\ y \end{pmatrix} + t$$

The matrix A can be expressed by:

$$A = \begin{pmatrix} r_1\cos\theta_1 & -r_2\cos\theta_2 \\ r_1\sin\theta_1 & r_2\sin\theta_2 \end{pmatrix}$$

so that the expression of w(x, y) turns into:

$$w\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} r_1\cos\theta_1 & -r_2\cos\theta_2 \\ r_1\sin\theta_1 & r_2\sin\theta_2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} e \\ f \end{pmatrix}$$

where

$r_1$ is a factor applied to the horizontal co-ordinate: if $r_1$ is bigger than one, it is a factor of expansion; if it is smaller than one, a factor of contraction;

$r_2$ is the corresponding factor applied to the vertical co-ordinate;

$\theta_1$ is a rotation angle applied to the horizontal co-ordinate;

$\theta_2$ is a rotation angle applied to the vertical co-ordinate;

$t$ is the matrix $\begin{pmatrix} e \\ f \end{pmatrix}$, where $e$ is a horizontal shift, associated in this particular case with a temporal shifting, and $f$ is a vertical shift, associated in this particular case with a transposition.

The following examples help in understanding how the transformation w(x, y) gener-
alizes the concept of linear transformation in the plane.

Figure 8. Linear transformation in the plane


Imagine a bidimensional musical structure defined in a time × notes space, time
being the horizontal co-ordinate and the notes the vertical one (figure 8 a). Figures 8
b) and 8 c) show the action of the parameters e and f, and figure 8 d) shows the effects of
the rotation parameters θ1 and θ2. Notice that this type of transformation allows us to
formalize contractions and expansions, from medieval practice all the way
to the spectral chords of our century, including the serial rotations of series of the Fifties
[O'CONNELL 1965].
In our particular case, we used a recursive system of six transformations w(x, y):

$$w_1\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1/6 & -8/17 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 0 \\ 38 \end{pmatrix}$$

$$w_2\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1/6 & -31/51 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 1500 \\ 36 \end{pmatrix}$$

$$w_3\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1/6 & -12/17 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 3000 \\ 20 \end{pmatrix}$$

$$w_4\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1/6 & -19/51 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 4500 \\ 42 \end{pmatrix}$$

$$w_5\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1/6 & -12/17 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 6000 \\ 12 \end{pmatrix}$$

$$w_6\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1/6 & -12/17 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 7500 \\ 24 \end{pmatrix}$$

The parameters of the equations (seen above) were found empirically; nevertheless, they
reflect the characteristics of the sequence. θ1 and θ2 being zero (cosine = 1, sine = 0,
whence the zero second rows) and r1 (the factor of horizontal contraction) being 1/6, the
values r1 cos θ1 are all equal to 1/6. The average sequence duration being 150 seconds,
the parameter e advances in steps of 1500, and the transposition f depends on the
intervals between registers.
The construction of the IFS was made with the make-w object (figure 9), which allows
us to construct the transformation function w(x, y) from the linear (r1 , r2 , e and f ) and
angular (θ1 and θ2 ) transformation data. The main patch used is shown in figure 10.
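In textual form, one such map can be sketched as a closure over its six parameters (a sketch of the mathematics above, not the library's implementation):

; Build w(x, y) from r1, r2, theta1, theta2, e and f, following the
; matrix form given above: a = r1 cos(theta1), b = -r2 cos(theta2),
; c = r1 sin(theta1), d = r2 sin(theta2).
(defun make-affine-w (r1 r2 theta1 theta2 e f)
  (let ((a (* r1 (cos theta1))) (b (- (* r2 (cos theta2))))
        (c (* r1 (sin theta1))) (d (* r2 (sin theta2))))
    (lambda (point)
      (destructuring-bind (x y) point
        (list (+ (* a x) (* b y) e)
              (+ (* c x) (* d y) f))))))

; w1 of the system above: r1 = 1/6, r2 = 8/17, theta1 = theta2 = 0,
; e = 0, f = 38. With theta = 0 the second matrix row vanishes, so
; every point is sent to the pitch offset f:
; (mapcar (make-affine-w 1/6 8/17 0 0 0 38) '((0 60) (1500 72)))
; => ((-28.235294 38.0) (216.11765 38.0))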

Figure 9. Construction of the IFS


Figure 10. Graphic definition of the six w(x, y) functions

The calculation was made with the following algorithm:

1) basic form;

2) curve inversion;

3) ”fractalization”, i.e. applying the six w(x, y) functions with the IFSX object (a sketch of this step follows the list);

4) evaluation of the curve and, if needed, return to 2).
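A sketch of the ”fractalization” step (an assumption about what the IFSX object computes, not its actual code): each iteration applies every map of the system to every point of the current curve, and the result is fed back in.

; W-FNS is a list of functions such as those built with
; make-affine-w above.
(defun ifs-step (points w-fns)
  ;; the union of the images of POINTS under every map of the system
  (loop for w in w-fns append (mapcar w points)))

(defun fractalize (points w-fns depth)
  (if (<= depth 0)
      points
      (fractalize (ifs-step points w-fns) w-fns (1- depth))))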

Figure 11 shows the three steps of the calculation, with the final curve inserted between
the two register curves.

4 ”Harmonic” correction of the curve


The curve calculated in this way could not be used immediately. Although this curve
had a ”self-similar” profile, its values were not necessarily adequate for the harmonic con-
straints I imposed. Each piece had, from a harmonic point of view, twelve main sections.
The harmonic material, a sequence of chords, originates from the chord-sequence type
of analysis (made with Audiosculpt) of an audio excerpt, the recitation of a fragment
from Victor Hugo’s second poem in ”Les Contemplations”:

”...l’homme est un puits où le vide toujours recommence...”


Figure 11. The 3 steps of calculation

A description of this process is outside the scope of the present text, so we shall say
only that the material collected by analysis was processed via ”harmonic reverberation”
and split into two groups: a first group containing all the frequencies, used for
the synthesis part, and a second one containing only the frequencies belonging to the
tempered chromatic twelve-tone scale, used to determine the instrumental compositional
fields. Figure 12 shows the harmonic reservoir used in the first piece (for trumpet),
containing six main chords and their ”reverberations”. It is important to notice that the
last chord corresponds to the first chord of the second sequence (piece for alto).

Figure 12. The harmonic reservoir of the first sequence (trumpet)

Therefore, if we wish to insert the calculated curve in a writing process, it is of


fundamental importance to ”correct” it harmonically so that it fits the harmonic fields


proposed for each section. Figures 13 and 14 illustrate this operation.
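In spirit (an assumption about the operation, not the actual contents of the omloop in figure 14), the correction amounts to snapping each value of the curve to the nearest pitch of the harmonic field in force at that moment:

; FIELDS holds one list of allowed pitches per section; values and
; pitches are plain MIDI numbers here.
(defun snap-to-field (value field)
  (first (sort (copy-list field) #'<
               :key (lambda (p) (abs (- p value))))))

(defun correct-curve (values fields section-length)
  (loop for v in values
        for i from 0
        collect (snap-to-field v (nth (floor i section-length) fields))))

; (snap-to-field 62.3 '(60 61 64 67)) => 61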

Figure 13. Harmonic correction of the self-similar curve

Figure 14. Detail of the omloop module from the patch in figure 13


Figure 15 shows the superposition of the two profiles, the original one (in the fore-
ground) and the corrected one (in the background, colored gray). The differences are
obvious, but the global profile remains the same.

Figure 15. Comparison between the ”original” and ”corrected” curves

5 The relationship between the ”conductor” and writing
Figures 16 to 20 show excerpts from the trumpet sequence (first stave) written in relation
to the fractal sequence (second stave).

Figure 16.

Figure 17.


Figure 18.

Figure 19.

Figure 20.

As I mentioned in the introduction, the writing maintains a relationship with the fractal
sequence that is either contrapuntal or that enhances it through synchrony and phase-shifting.

6 Evaluating the relationship between the curves


In order to gain a more global vision of the evolution of both sequences (that written for
instrument as well as the calculated one) we shall superimpose them so as to compare
the ways in which they evolve. Figure 21 shows the patch used to make the comparison.
In A, the object midifile contains a MIDI file with both sequences. In B, the
omloop choose-track (figure 22) splits the tracks of the MIDI file. In C, the
abstraction set-bpf-color, which is in Lisp mode, contains the following code:

(lambda (self color) (setf (bpfcolor self) color) self)

to assign a respective color to each BPF3 .

3 This function was suggested by Carlos Augusto Agon


Figure 21. Patch used to make the comparison


Figure 22. omloop choose-track

Figure 23 displays both sequences. The foreground sequence is the written one, and
the sequence in the background, the fractal one. Notice how the ”written” sequence either
follows the fractal sequence or plays against it in contrapuntal mode.

Figure 23. Both sequences


Even though figure 23 gives an idea of the relationship between the two sequences,
the details of the written sequence obscure this relationship. In the interests of
clarification, I ”smoothed out” the curves using an ”average” filter, described by the
following expression:

$$x'_i = \frac{1}{2 \times \mathrm{index} + 1} \sum_{j=i-\mathrm{index}}^{i+\mathrm{index}} x_j \qquad (1)$$

However, before applying this process, both curves must be sampled.
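A Lisp sketch corresponding to formula (1) (an illustration; the actual mean-filter2-rec of figure 25 may treat the curve edges differently):

; Moving-average filter with window half-size INDEX; the window is
; clipped at the ends of the list. DEPTH applies the filter
; recursively, for stronger smoothing.
(defun mean-filter (values index)
  (let ((n (length values))
        (vec (coerce values 'vector)))
    (loop for i below n
          collect (let* ((lo (max 0 (- i index)))
                         (hi (min (1- n) (+ i index)))
                         (window (loop for j from lo to hi
                                       collect (aref vec j))))
                    (/ (reduce #'+ window) (length window))))))

(defun mean-filter-rec (values index depth)
  (if (<= depth 0)
      values
      (mean-filter-rec (mean-filter values index) index (1- depth))))

; (mean-filter '(0 10 0 10 0) 1) => (5 10/3 20/3 10/3 5)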

Figure 24. The abstraction bpf-sampleinit

Figure 25. omloop mean-filter2-rec


In figure 21, at E, the abstraction bpf-sampleinit samples both curves over 300 points
(see figure 24). At H, the omloop mean-filter2-rec operates a recursive filtering (see
figure 25), F being the index (i.e. the window size for smoothing) and G the depth level
of the recursive filtering.

Figure 26.

Figure 26 shows the superimposition of the two sequences (written sequence in the
foreground, fractal sequence in the background), and the ”interplay” between them.

Bibliography
[1] BARNSLEY M. : Fractals Everywhere. Academic Press Inc, 1988.

[2] GOGINS M.: « Iterated Functions Systems Music » Computer Music Journal, vol
15, n◦ 1, MIT-Press, 1991.
[3] MALT M.: Chaos - librairie de modèles chaotiques et de fractales Ircam, Paris, 1994.
[4] MALT M.: « Lambda3.99 (Chaos et Composition Musicale) » Troisièmes Journées
d’Informatique Musicale JIM 96, Ile de Tatihou, Normandie, France, 1996.

[5] MALT M.: Les mathématiques et la composition assistée par ordinateur, concepts
outils et modèles Thèse en musique et musicologie du XXème siècle, Ecole des Hautes
Etudes en Sciences Sociales, directeur de thèse Marc Battier, Paris, France, 2000, p.
703-767.
[6] O’CONNELL W.: “Tone Spaces”, Die Reihe, n◦ 8, 1965.
[7] PEITGEN H. O., JÜRGENS H., SAUPE D.: Chaos and Fractals: New Frontiers of
Science, Springer-Verlag, New York, 1992.

Mikhail Malt
Mikhail Malt has a double formation, both sci-
entific and musical (engineering, composition
and orchestral conducting). After conducting
youth orchestras for ten years, he began his
musical career in Brazil as a flutist and orches-
tral conductor.
He is the author of a PhD thesis written
at the Ecole des Hautes Etudes en Sciences
Sociales on the use of mathematical models
in computer-assisted composition, and a re-
searcher at MINT-OMF Sorbonne Paris IV (Musicologie, informatique et nouvelles
technologies Team, a branch of the Observatoire Musical Français).
Malt currently teaches computer-assisted composition and musical synthesis at Ir-
cam. He is currently pursuing his composition and research activities in the fields of
artificial life models, musical representation and compositional epistemology.

173
Algorithmic Strategies in A
Collection of Caprices
- Paul Nauert -

Abstract. This essay describes the author’s use of OpenMusic to assist in the
composition of a piece for solo piano. The composition process involved two principal
stages. The first stage yielded a rhythmic sketch of the entire piece, using a controlled
random process implemented with functions from the OMTimePack library. The second
stage involved the selection of pitch material suited to the rhythmic sketch; a variety of
functions in the OMPitchField library supported this task.

***

1 Introduction
In late 2001 I began discussions with the pianist Marilyn Nonken about a new composition
for solo piano. The work that resulted, A Collection of Caprices, was completed by me
in August 2002 and premiered by Nonken on October 2, 2002 in Santa Cruz, California.
She subsequently gave several additional performances during a coast-to-coast tour of
the United States; my work shared the program with pieces by Milton Babbitt, Jason
Eckardt, Michael Finnissy, and David Rakowski (all written for Nonken).
A Collection of Caprices is not a set of character pieces; the title alludes instead to
the way in which this continuous, single-movement piece frequently - and capriciously
- shifts character. The title is a phrase used by the American composer Mel Powell
describing a group of his short compositions from the 1960s; the refinement and poise
of Powell’s music represent ideals for which my own work aims, and I wrote the piece
as a tribute to him. My compositional process involved many of the same strategies I
have employed regularly since the mid-1990s, including software tools of my own design
to assist with the organization of rhythmic and harmonic aspects of the work. What was
new in the process this time was the integration of these tools into the OpenMusic visual
programming environment developed at IRCAM by the Music Representations Team.
This essay describes the strategies used to organize A Collection of Caprices and the
role played by OpenMusic in implementing those strategies. My compositional process
over the past ten or so years has involved a fundamental separation of temporal structure
and pitch structure. I habitually complete a rhythmic sketch of a piece, from start to
finish, before returning to measure 1 and beginning to work out pitch choices. (As pitch
details fall into place, I am also concerned with all of the remaining dimensions that are
important to the musical result; but these aspects of the compositional process are the
least structured and I will therefore say the least about them.) This essay begins with a
section devoted to rhythmic strategies and continues with another devoted to harmonic
strategies.


2 Organizing time
My approach to constructing a rhythmic sketch is generally top-down, working from the
large to the small, and I decided initially that A Collection of Caprices would consist of
11 sections, alternating between longer principal sections and shorter interludes. I chose
section durations from the geometric series 12 × (49/40)^n = {. . . , 12.0, 14.7, 18.0, . . . }.
Figure 1 illustrates the sectional plan.
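The series is easy to verify (a quick check; the exponent convention shown is an assumption):

(loop for n below 5 collect (* 12.0 (expt 49/40 n)))
; => (12.0 14.7 18.0075 22.059... 27.022...)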

Figure 1. Overview of rhythmic sketch, showing section durations (top), plan for stochastically
generated rhythmic streams (layers I and II at bottom) and additional derived material (layer
III at bottom)

Figure 1 also shows the design according to which rhythmic details for the piece
were generated. The OMTimePack library in OpenMusic provides tools for generating
streams of durations with specific statistical profiles. When designed appropriately, these
profiles determine characteristic ”rhythmic behaviors”; in this case, I worked with four
contrasting profiles, corresponding to behaviors I named fragmented motions, continuity,
whisps, and supple. The design is organized into three layers, which correspond partially
to layers in the texture of the finished composition. Reading across the row for layer I,
the plan begins with a stream of continuity, maintaining this profile throughout sections 1
through 3. A gradual interpolation from the continuity profile to the fragmented motions
profile takes place across section 4, with a corresponding gradual shift in the behavior of
the rhythmic stream. Once the new fragmented motions profile is fully established at the
beginning of section 5, it is maintained throughout that section. Layer I becomes inactive
at the beginning of section 6. The rest of the overview can be read in the same fashion.
(Layer III is derived from transformations of parts of layer II, according to procedures
described further on.)
Figures 2 and 3 show excerpts of the detailed rhythmic sketch that I generated accord-
ing to the plan discussed above. Figure 2 is from the beginning of the sketch, illustrating
the continuity behavior in layer I and the fragmented motions behavior in layer II. The
contrast between these behaviors is somewhat hard to judge in such a short excerpt. The
continuity behavior involves a great deal of motion at a rate of two to three attacks per
beat, with frequent excursions into more rapid zones of activity. In contrast, the frag-
mented motions behavior moves more often at a rate of one or two attacks per beat, and
its excursions into rapid terrain are briefer and occur less often. Both behaviors reach
occasional points of repose on longer durations.


Figure 2. Rhythmic sketch, mm. 1-13

Figure 3 shows the entirety of section 4, plus the immediately surrounding material
(layer I only). A gradual shift in behavior from continuity to fragmented motions occurs
across this section, although the compressed scale of the section, and the presence of a
couple of very long durations within it, make this shift a little more difficult to follow.
Despite this difficulty, the passage does what I needed musically, ushering in material
that moves with a greater sense of momentum.
In order to illustrate what becomes of this rhythmic material, Figure 4 provides the
opening of the completed score, which corresponds to the sketch given in Figure 2. In
this case, the multiple layers of the sketch become quite tangled in the completed score,
although repeated pitch cells can be seen emerging in one layer or the other at various
points (this is perhaps most clear in m. 10, where contrasting cells are seen in each layer).
To generate the rhythmic streams on which A Collection of Caprices is based, I made
extensive use of OpenMusic’s OMTimePack library, a specialized collection of tools for
creating and manipulating rhythms. The generative engine at the core of this library is
the function marktime, which creates a rhythm by repeatedly selecting a duration event
at random from a pool of possible events and appending it to the end of the resulting
rhythm until a specified total duration is reached. As its name suggests, this function
involves a Markov process: each new selection is made with the possible events weighted
according to the last few selections already made.
A User’s Guide and a function-by-function Reference for the OMTimePack library are
included along with the software itself in the materials distributed by the IRCAM
Software Forum.


Figure 3. Rhythmic sketch, mm. 87–107

Here I will focus on aspects of the marktime function and the role it played in the
creation of A Collection of Caprices.
The marktime function can assemble rhythms using both simple duration events
(each one a single duration) and compound duration events (each one a “cell” formed by a
specific string of durations). And it can operate as a zeroth, first, or second order Markov
process (event probabilities at each step being static or depending on the preceding one
or two selections). I exclusively used simple duration events and first order Markov
processes to construct the rhythmic streams I needed.
For the benefit of readers who are unfamiliar with the basic operation of a first
order Markov process, this paragraph describes a simple example. More experienced
readers may proceed to the next paragraph, which returns to the specifics of my compo-
sition. (Ames (1989) provides a good general introduction to compositional applications
of Markov processes; its primary emphasis is not on rhythm construction.) Figure 5
presents two events, e and h; assigns probabilities to them; and shows a rhythm built
with them. The probabilities are static in this example - each selection is made with
a 30% chance of choosing e and a 70% chance of choosing h - so it represents a zeroth
order Markov process. Figure 6 replaces the static probabilities with probabilities that
vary according to which event was most recently selected: if the previous choice was e,
then the next selection is made with an 80% chance of choosing e and a 20% chance of
choosing h; but if the previous choice was h, then the next selection is made with an equal
chance of choosing either event. A process in which probabilities are conditioned like this
is called a first order Markov process. Figure 6 also shows a rhythm built according to
this process.
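To make the mechanics concrete, here is a minimal Common Lisp sketch of such a generator; the names and table layout are illustrative and do not reproduce OMTimePack’s internals. Each row of the probability table holds the selection weights that apply when the corresponding event was the previous choice.

(defun weighted-choice (weights)
  "Return a random index, with probability proportional to WEIGHTS."
  (let ((r (random (float (reduce #'+ weights))))
        (acc 0))
    (loop for w in weights
          for i from 0
          do (incf acc w)
          when (< r acc) return i)))

(defun markov-rhythm (events p-table tot-time init)
  "Append randomly selected durations until their sum reaches TOT-TIME.
Row I of P-TABLE gives the weights in force when event I was chosen last."
  (let ((result '()) (prev init) (total 0))
    (loop while (< total tot-time)
          do (let* ((next (weighted-choice (nth prev p-table)))
                    (dur (nth next events)))
               (push dur result)
               (incf total dur)
               (setf prev next)))
    (nreverse result)))

;; Figure 6 analogue, with hypothetical duration values for e and h:
;; (markov-rhythm '(0.5 2.0) '((0.8 0.2) (0.5 0.5)) 20 0)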
The OpenMusic marktime function requires four input parameters: event-space rep-
resents the pool of duration events for random selection; p-table controls the Markov
process of random selection; tot-time specifies the desired duration of the output rhythm,
and init-conds provides context for the first few random selections. To generate the


Figure 4. A Collection of Caprices, mm. 1–13

Figure 5. (a) Static probabilities; (b) representative output

rhythms used in A Collection of Caprices, I constructed the following pool of simple
duration events1: 0.09, 0.12, 0.37, 0.75, 1.2, 1.88, 2.88, 4.67, 8.25. These values provide the
possibility of two contrasting rates of very rapid motion (based on strings of 0.09 or 0.12),
two additional durations that are still qualitatively short (0.37 and 0.75), enough choices

1 Each one a single duration expressed as a multiple of a quarter note at the tempo M.M. ♩ = 76.


Figure 6. (a) Probability table for first-order Markov process; (b) representative output

in the medium-short through medium-long range to provide variety, and two contrasting
durations that are sufficiently long to serve as points of repose (4.67 and 8.25). All of
the rhythms in layers I and II of the rhythmic sketch for A Collection of Caprices were
generated by random selection from this pool.
Once the event space was established, I designed a two-dimensional table for each of
the four rhythmic behaviors called for in my initial plan for the rhythmic sketch. Figure
7 illustrates the tables corresponding to the fragmented motions and continuity behavior
patterns.

Figure 7. Probability tables: (a) fragmented motions; (b) continuity.

Figure 8 shows an OpenMusic patch using marktime with inputs configured to re-
produce layer I of the opening of the rhythmic sketch for A Collection of Caprices. The


patch is almost trivially simple, because marktime is specialized to accomplish exactly
the task called for in this patch. The event-space is the pool of nine durations described
earlier, and the continuity p-table is organized as in Figure 7(b). The total duration,
165.7, is the sum of the durations planned for sections 1, 2, and 3, because the rhythmic
behavior of layer I is determined by the continuity profile throughout these sections. I
no longer have a record of the actual value supplied to the init-conds input. This is
the value that marktime uses in place of a “previous selection” at the very beginning of
the process, before any selections have actually been made. The value 2 is a good guess
here, because it sets up the process in such a way that the third row of the continuity
table determines the probability of each event; in this row, the event 0.37 is assigned a
high probability of 58%. It can be seen in the resulting rhythmic sketch (Figure 2) that
this event was the one actually selected by marktime - 0.37 after quantization appears as
a triplet eighth note. (Of course, because marktime implements a random process, the
output ldur won’t reproduce the rhythms in my composition but will constitute another,
statistically similar rhythmic stream.)
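In terms of the sketch given earlier, the patch corresponds roughly to the call below; the continuity table itself is not reproduced (only one of its rows is documented above), so this indicates the shape of the computation rather than reconstructing it.

(defparameter *event-space*
  '(0.09 0.12 0.37 0.75 1.2 1.88 2.88 4.67 8.25))

;; *continuity-table* would hold nine rows of weights, of which only
;; the third (where 0.37 receives a 58% probability) is quoted above.
;; (markov-rhythm *event-space* *continuity-table* 165.7 2)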

Figure 8. OpenMusic patch used for the beginning of the rhythmic sketch (layer I).

The empty input sockets in Figure 8 are for optional parameters that allow the genera-
tive process to evolve over time in a variety of ways. At the beginning of my composition,
the rhythmic behaviors remain fixed, so these optional parameters are left unused. But
during section 4, the rhythmic behavior of layer I evolves gradually from continuity to
fragmented motions. To create this gradual shift, (and a more protracted one in section
9), I supplied a second probability table to the evol-table input of marktime. The
specifics of this patch are detailed in Figure 9. The marktime module at the left is re-
sponsible for the rhythms in sections 1–3, with static continuity behavior. The marktime
module at the right is responsible for the rhythms in section 4, which evolve from con-
tinuity to fragmented motions. In this second marktime module, the continuity table
is supplied to the p-table input and the fragmented motions table is supplied to the


evol-table input. This causes the module to use values from the former table at the
beginning of the process, to use values from the latter table at the end of the process,
and to interpolate between these beginning and end values over the course of the process.
(The rightmost socket of this module is empty, so interpolation occurs with the default
linear shape). Finally, the first module’s fin-conds output reports the context at the
end of section 3; feeding this value to the init-conds input of the second module ensures
that the transition from section 3 to section 4 in the rhythmic sketch is seamless.
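A plausible sketch of this evolving behavior, reusing the illustrative functions from above, blends the two tables as a function of elapsed time; the real evol-table mechanism also supports non-linear interpolation shapes.

(defun blend-tables (table-a table-b frac)
  "Entrywise linear interpolation: FRAC = 0 gives TABLE-A, 1 gives TABLE-B."
  (mapcar (lambda (row-a row-b)
            (mapcar (lambda (a b) (+ a (* frac (- b a)))) row-a row-b))
          table-a table-b))

(defun evolving-markov-rhythm (events table-a table-b tot-time init)
  "Like MARKOV-RHYTHM, but the weights drift from TABLE-A to TABLE-B
as the accumulated duration approaches TOT-TIME."
  (let ((result '()) (prev init) (total 0))
    (loop while (< total tot-time)
          do (let* ((now (blend-tables table-a table-b (/ total tot-time)))
                    (next (weighted-choice (nth prev now)))
                    (dur (nth next events)))
               (push dur result)
               (incf total dur)
               (setf prev next)))
    (nreverse result)))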

Figure 9. OpenMusic patch used for the first four sections of the rhythmic sketch (layer I).

Layers I and II of the rhythmic sketch for A Collection of Caprices were generated
entirely according to the techniques outlined so far. To produce layer III, I began by
processing a copy of the data from layer II, sections 9–11. Using simple arithmetic
modules in OpenMusic, I rescaled this data so that it spanned sections 5 through 9.
The behavioral profile of this data, supple, focuses on a moderate rate of motion and
makes no excursions into very rapid territory; and the scaling applied to it slowed it
down further. In an early conception of the piece, I intended to use this slow material as
is. But on further reflection I felt I needed greater rhythmic activity in the middle of the
composition. So I worked outside the visual-programming environment of OpenMusic
and cobbled together some functions directly in Common Lisp that take an existing
rhythmic stream, make controlled random selections from a pool of rhythmic cells, and
return a new stream formed by embedding the randomly selected cells into the durations
of the original stream. This technique resembles one I am developing for the next release
of OMTimePack.
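One plausible reading of that embedding step, sketched under the assumption that each cell is expressed as fractions of the host duration (the actual Lisp functions are not reproduced here):

(defun embed-cells (stream cells density)
  "Return STREAM with some durations subdivided.  With probability
DENSITY, a duration is replaced by a randomly chosen cell from CELLS,
each cell a list of fractions summing to 1, scaled to that duration."
  (loop for dur in stream
        append (if (< (random 1.0) density)
                   (mapcar (lambda (frac) (* frac dur))
                           (nth (random (length cells)) cells))
                   (list dur))))

;; e.g. (embed-cells '(1.2 1.88 4.67) '((1/2 1/2) (1/4 1/4 1/2)) 0.7)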


As I worked to craft the rhythmic sketch into a completed composition, I decided
to make one drastic alteration to the original plan. The conclusion of section 5 shaped
up to be a passage that develops strong forward momentum. In my plans, this buildup
was followed by an interlude (section 6); but I felt that it made more dramatic sense
to proceed directly to the next principal section (namely, section 7). Therefore I simply
deleted all of section 6 and made minor adjustments to fix up the “seam” between sections
5 and 7.

3 Organizing pitch
Because the entrances and exits of rhythmic layers, and the changes of rhythmic behavior
within each layer, are timed according to the 11-section structure of my composition,
these higher-order rhythmic events help to articulate its sectional plan. At some of the
sectional boundaries, however, there are no higher-order rhythmic events (e.g. the starts
of sections 2 and 3) or only subtle ones (the beginning or end of a gradual behavioral
change, e.g. the start of section 4, where the beginning of a transition from continuity
to fragmented motions is not immediately obvious).
The strategies I have developed for pitch organization begin with my concern for
contributing to the articulation of large-scale structural divisions. Specifically, I use
fixed-pitch formations called “pitch fields” in several pieces - including A Collection of
Caprices - to create a sense of uniformity within each section and contrast between
sections. Within each section, a structural “middleground” emerges when subsets of the
field are projected as harmonic units. So there is note-to-note activity in the foreground,
chord-to-chord activity in the middleground, and field-to-field activity in the background.
Figure 10 traces these different levels in a passage spanning the end of section 1 and the
beginning of section 2.
The boundary between sections 1 and 2 coincides with the pitch-field change indicated
on the lowest pair of staves in Figure 10. In the middle level of the same Figure, the basic
harmonic plan is laid out as a series of chords. The chord level is reproduced more or less
as it appears in my sketches. The foreground activity in the completed score sometimes
breaks these chords into smaller units, and other pitches from the field are sometimes
worked into the texture as well. Despite its volatility, the harmonic middleground is
essential to my conception of the entire piece. The remainder of this portion of the essay
offers a closer look at the OpenMusic patches I used to organize this chord-to-chord level.
The reader is assumed to have some familiarity with basic set-theoretic models of pitch
as presented in (Forte, 1973), (Rahn, 1980), (Morris, 1987), and (Lewin, 1987); readers
unfamiliar with this or similar work can still read on for a general idea of my strategies.
Figure 11 shows a patch with the “master configuration” that assisted my harmonic
choices throughout A Collection of Caprices. During the gradual process of developing
the rhythmic sketch into a completed composition, the master configuration remained
constant while I made repeated changes to the data supplied within each subpatch and
to some of the particular functions that were plugged in to these subpatches. The master
configuration can be read in stages, which the essay will trace through twice: first to
provide a general overview, and then in more detail, to show more precisely how data flows
through the network. During this second pass, we will look inside the main subpatches.
But we begin with the overview. The first stage of the master configuration assembles
a pool of pitch class sets (pcsets) according to criteria for interval class (ic) content;


Figure 10. A Collection of Caprices, mm. 31–38, with underlying chords and pitch fields

these criteria constantly shift as I work, as a result of relatively improvisational decision-
making, informed by previous harmonic choices and details in the rhythmic sketch. The
second stage filters the pcset pool according to criteria for pc content; again, the specific
criteria shift according to a series of more-or-less spontaneous decisions. In a separate
branch of the network, a pitch field (resembling those depicted above in Figure 10) is
constructed; this field determines which pitches are available for chord-building. The
two branches of the network converge as inputs to the function find-pcset-in-field,
which determines, for each pcset in the filtered pool, all the corresponding pitch sets; the
results of this search constitute a pool of pitch sets. (Given a pcset X and a field F, a
corresponding pitch set is a subset of F that contains exclusively one instance of each pc
in X.) The final significant stage of the master configuration filters the pool of pitch sets
according to criteria for pitch and interval content. The result of this stage is a short list


Figure 11. Master configuration of the patch used to assist harmonic choices

of chords, focused according to multiple criteria; feeding this list to a chord-seq factory
allows me to view it in conventional music notation and audition it via MIDI. I make
harmonic choices from these short lists, or adjust the criteria and try again if none of the
choices satisfies me.
Let us take a closer look at how each stage of the master configuration behaves.
Figure 12 provides details about the construction of the pcset pool. This subpatch begins
with a list of pcsets, each of which represents a family of transpositionally equivalent
pcsets. As readers who are familiar with the twelve-tone landscape may know, there
are 66 such families of pentachords and 80 of hexachords. The next stage within the
subpatch sorts these 146 families in order of decreasing similarity to a target pcset,
{0, 2, 3, 5, 6, 8}, and selects the first fifth of the sorted list - the 29 families with the
strongest intervallic resemblance to the target. The similarity computation makes use of
the “interval angle” measure proposed in (Scott & Isaacson, 1998). The final stage of this
subpatch expands the result by replacing each family’s representative with a complete
list of family members, yielding a list of 336 pcsets at the output. (Without the sort-and-
select procedure, the raw list of pentachord and hexachord families would have expanded
into a total of 1716 pcsets.)
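The interval-angle computation itself is compact enough to sketch: treat each interval-class vector as a vector in six-dimensional space and measure the angle between the two. The helper below is a straightforward reading of that idea, not the library’s code.

(defun ic-vector (pcset)
  "Interval-class vector: counts of interval classes 1-6 in PCSET."
  (let ((v (make-list 6 :initial-element 0)))
    (loop for (a . rest) on pcset
          do (loop for b in rest
                   for ic = (min (mod (- b a) 12) (mod (- a b) 12))
                   do (incf (nth (1- ic) v))))
    v))

(defun interval-angle (set-a set-b)
  "Angle (radians) between ic vectors, after Scott & Isaacson (1998);
smaller angles mean stronger intervallic resemblance."
  (flet ((norm (w) (sqrt (reduce #'+ (mapcar #'* w w)))))
    (let ((u (ic-vector set-a)) (v (ic-vector set-b)))
      (acos (min 1.0 (/ (reduce #'+ (mapcar #'* u v))
                        (* (norm u) (norm v))))))))

;; Ranking the families then amounts to something like:
;; (sort (copy-list families) #'<
;;       :key (lambda (s) (interval-angle s '(0 2 3 5 6 8))))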
Figure 13 illustrates the subpatch responsible for filtering our pool of 336 pcsets
according to specific criteria for pc content. The specialized function filter-chordlist
performs the filtering, and the criteria controlling this process are defined here by a pair of
make-pc-vldg-test modules. The “vldg” in this name stands for voiceleading, and these
modules can be seen as tools for testing chords according to the voiceleading connections


Figure 12. Generating the pcset pool

each one has to some reference pcset. Specifically, the module on the left tests whether
each pcset in the pool contains at least 2 and at most 2 pcs that form ic 0 relative to
some pc in the set {2, 9} - in other words, it tests whether each pcset contains both pcs
2 and 9. Similarly, the module on the right tests whether each pcset contains at most
one pc in common with the reference set R = {4, 6, 8, 11} and also whether each pcset
contains at least 2 and at most 4 pcs that form ic 1 relative to R. The mathematical
model of voiceleading underlying these computations is described at length in (Nauert,
forthcoming). Of the 336 pcsets in the input pool, 11 pass through the filter: {9, 11, 0,
2, 3}, {9, 11, 0, 2, 5}, {9, 10, 0, 2, 3, 6}, {7, 9, 10, 1, 2, 4}, {0, 1, 2, 3, 6, 9}, {9, 10,
0, 2, 3}, {9, 11, 2, 3, 5}, {9, 0, 2, 3, 4}, {9, 10, 0, 1, 2, 4}, {7, 9, 10, 11, 1, 2}, and {9,
11, 0, 1, 2, 5}. (Reviewing the multiple criteria in operation so far, these 11 pcsets are
pentachords and hexachords that are intervallically similar to {0, 2, 3, 5, 6, 8} and have
particular voiceleading relationships to {2, 9} and {4, 6, 8, 11}.)
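Expressed as predicates, these criteria reduce to simple counting tests; the sketch below mirrors their effect, though the make-pc-vldg-test machinery itself is richer than this.

(defun count-ic-relative (pcset ref ic)
  "Number of pcs in PCSET lying at interval class IC from some pc of REF."
  (count-if (lambda (p)
              (some (lambda (r)
                      (let ((d (mod (- p r) 12)))
                        (= ic (min d (- 12 d)))))
                    ref))
            pcset))

(defun passes-pc-filter-p (pcset)
  "Both pcs 2 and 9 present; at most one common tone with {4 6 8 11};
between 2 and 4 pcs a semitone away from that set."
  (and (= 2 (count-ic-relative pcset '(2 9) 0))
       (<= (count-ic-relative pcset '(4 6 8 11) 0) 1)
       (<= 2 (count-ic-relative pcset '(4 6 8 11) 1) 4)))

;; (remove-if-not #'passes-pc-filter-p pcset-pool) => the 11 survivors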
To keep the presentation streamlined, this discussion omits details about the subpatch
responsible for constructing pitch fields. Readers who are interested in these details can
study the OpenMusic documentation for the main function at work within that subpatch,
make-pfield; the structure and uses of different kinds of pitch fields are considered at
length in (Nauert, 2003). For the purpose of continuing our detailed trace through the
master configuration we will use the same pitch field used throughout the opening of
section 2 of A Collection of Caprices. This field appears at the bottom of Figure 10.


Figure 13. Filtering the pcset pool

Note for instance that the only instances of the pc G in this field are the pitches G1
and G5, so if a pcset contains G, each corresponding pitch set will contain G1 or G5.
Despite the constraining effects of the pitch field, the pitch sets corresponding to our 11
pcsets multiply rapidly. One of our 11 pcsets is {9, 11, 0, 2, 3}, and one of the pitch sets
corresponding to this pcset is {A1, B2, C3, D5, E5}, as readers can verify by locating
these pitches in the appropriate field in Figure 10. The find-pcset-in-field function
in the master configuration finds this pitch set and 1826 others. Thus the filtered pool
of 11 pcsets corresponds to a pool of 1827 pitch sets.
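A naive version of this search is a cartesian product over the candidate pitches for each pc, which is entirely workable at the scale of these fields. The function name follows the text, but the body is a sketch rather than the library implementation.

(defun find-pcset-in-field (pcset field)
  "All pitch sets drawn from FIELD (a list of MIDI note numbers) that
contain exactly one instance of each pc in PCSET."
  (if (null pcset)
      (list '())
      (let ((candidates (remove-if-not
                         (lambda (p) (= (mod p 12) (first pcset)))
                         field))
            (tails (find-pcset-in-field (rest pcset) field)))
        (loop for c in candidates
              append (loop for tail in tails collect (cons c tail))))))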

Figure 14. Filtering the pitch set pool


The final stage of the master configuration filters the large pool of pitch sets as
illustrated in Figure 14. This subpatch consists of two stages. First, the pitch set pool is
filtered according to criteria specified in an additional subpatch named P test. Next, the
results of this initial filtering stage are sorted in descending order according to the
lowest pitch of each chord, and the highest third of this sorted list is sent to the output.
Figure 15 shows the P test patch that controls the first of these two stages. Filtering
is based on a combination of three tests. The make-spacing-test module requires each
chord to be spaced such that the interval from the lowest to the second lowest note is at
least 5 and at most 21 semitones, and such that the interval from the second to the third
lowest note, from the third to the fourth lowest, and so on, is at least 2 and at most 11
semitones. The make-p-vldg-test module requires at least 1 and at most 2 notes in
each chord to form interval 0 relative to some pitch in the set {14, 21} - in other words,
it ensures that each chord contains D5, A5, or both. Finally, the make-inclusion-test
requires each chord to contain at least 1 and at most 3 transpositions of the pitch set {0,
14, 21}. Only chords meeting all three requirements pass through the filter. Together,
the filtering, sorting, and selecting procedures depicted in Figure 14 reduce the pitch set
pool from 1827 chords to just the 8 depicted in Figure 16.
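The spacing criterion, at least, is easy to state directly; a minimal sketch, assuming chords as ascending lists of MIDI pitches:

(defun spacing-ok-p (chord)
  "Bottom interval between 5 and 21 semitones; every higher adjacent
interval between 2 and 11 semitones."
  (let ((ints (mapcar #'- (rest chord) chord)))
    (and ints
         (<= 5 (first ints) 21)
         (every (lambda (i) (<= 2 i 11)) (rest ints)))))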

Figure 15. Combination of three tests for filtering the pool of pitch sets

Figure 16. The result: eight chords meet criteria established throughout the master configu-
ration

Bibliography
[1] Ames, C.: “The Markov Process as a Compositional Model: A Survey and Tutorial”.
Leonardo 22.2: 175–187, 1989.
[2] Forte, A.: The Structure of Atonal Music. Yale University Press, New Haven,
Connecticut, 1973.
[3] Lewin, D.: Generalized Musical Intervals and Transformations. Yale University
Press, New Haven, Connecticut, 1987.
[4] Morris, R.: Composition with Pitch Classes: A Theory of Compositional Design.
Yale University Press, New Haven, Connecticut, 1987.
[5] Nauert, P.: “Field Notes: A Study of Fixed-Pitch Formations”. Perspectives of New
Music 41.1: 9–60, 2003.
[6] Nauert, P.: “The Progression Vector: Modelling Aspects of Posttonal Harmony”.
Journal of Music Theory (forthcoming).
[7] Rahn, J.: Basic Atonal Theory. Longman, New York, 1980.
[8] Scott, D. and Isaacson, E.J.: “The Interval Angle: A Similarity Measure for Pitch-
Class Sets”. Perspectives of New Music 36.2: 107–142, 1998.

Paul Nauert
Paul Nauert holds degrees from the Eastman School of
Music, where he was awarded the McCurdy prize in
composition, and Columbia University, where he earned
his Ph.D. in music theory in 1997 with the assistance of
a Mellon Foundation Fellowship. His principal composi-
tion teachers include Joseph Schwantner, Robert Mor-
ris, and Fred Lerdahl. On the faculty of UC Santa Cruz
since 1996, Dr. Nauert has recently held visiting posi-
tions at IRCAM in Paris and Columbia University in
New York. As a composer, he has worked with interna-
tionally prominent soloists (including pianist Marilyn
Nonken and guitarist David Tanenbaum) and ensembles (including the Peabody
Trio and NOISE). His scholarly publications on aspects of pitch and rhythmic orga-
nization in contemporary music appear in Perspectives of New Music, the Journal
of Music Theory, and The Musical Quarterly, and his software tools for algorith-
mic composition are published through the IRCAM Software Forum. The latest
information about Dr. Nauert’s work as a theorist and composer can be found at
http://arts.ucsc.edu/faculty/nauert/.

Sculpted Implosions: Some
Algorithms in a Waterscape of
Musique Concrète
- Ketty Nez -

Abstract. Sculpted Implosions, for live French horn and 8 track tape, was written at
IRCAM as a cursus project during 1998-99. This personal introduction to programming
and computer music was an ongoing exploration - of electroacoustic sound treatments,
algorithmic procedures, and real-time processes - using software developed at IRCAM,
including OpenMusic, AudioSculpt, and Max/MSP.

***

For this piece, algorithms to control pitch and rhythm were written using OpenMusic,
further developing compositional techniques already present in the author’s instrumental
music. The resulting MIDI information was then transferred into breakpoint filter pa-
rameters for further treatment in AudioSculpt, using Hans Tutschku’s library OM-AS1 .
The sounds which were filtered were prepared using standard concrète treatments. All
from nature, these were various samples of water, e.g. ocean waves, fountains, and rain.
As a “clin d’oeil” to the notion of the corporeality of a live performer, and in particular
the French horn as a (rather watery!) brass instrument, samples of teeth-brushing and
gargling were also included. These were subjected to time stretching, pitch shifting, and
cross synthesis using both AudioSculpt and Max/MSP. Resulting individual sound files
were spatialized in circular rotations at varying speeds and recorded out into 8 tracks,
using the Max/MSP Spatialisateur software. During performance, the live French horn
is similarly ”rotated” over the same 8 loudspeakers.
The French horn plays an obligato which floats in and out of the watery sounds, doleful
riffs of oddly remembered nineteenth-century orchestral quotes from Mahler, Strauss, and
Wagner, lost on a surreal ocean voyage. The author soon realized, however, that the tape
part could well represent a composition on its own, thus producing a second version of
only the 8 tracks which is often performed in concerts, Sculpted Implosions II. This
paper will discuss the OpenMusic algorithms used to create the tape part, giving some
example patches of processes, as well as briefly touch on the AudioSculpt and Max/MSP
processes used. The compositional ”journey” for this piece began with modifications to

1 First released at the November 1998 IRCAM Forum. This library is explained in Hans Tutschku,
L’application des paramètres compositionnels au traitement sonore dans la composition mixte et élec-
troacoustique, in PRISMA 01 (Florence: EuresisEdizioni, 2003), 77-116. Also, processes specifically
from Sculpted Implosions are discussed in his dissertation, L’application des paramètres compositionnels
au traitement sonore (Formation Doctorale Musique et Musicologie du XXe siècle, Université Paris-
Sorbonne (Paris IV), Ecole des Hautes Etudes en Sciences Sociales Paris, 1999).


the simple harmonic series starting from E♭, the lowest note of the piece, chosen
in reference to the infamous and luxuriously orchestrated E♭ pedal tone of Wagner’s
prelude to Das Rheingold. As pedal tone, this occurs in the opening and penultimate
sections, featuring rich low sounds multiply layered, slightly detuned, slowly glissandoed,
and treated by slowly-opening low-pass filters.
To give pitches for various simultaneities, variants of the E[ series were derived from
intervallic modifications to successive intervals. Thus 1, 2, 3, and 0 semitones were
successively added to consecutive intervals, as well as rotations of this set and subsets of
this set, [2, 3, 0, 1] and [3, 0, 1, 2] etc., or [0, 1, 2]. In figure 1, the E♭ harmonic series
has been progressively altered by rotations of semitones [0, 1, 2, 0, 1, 2, …], [1, 2, 0, 1,
2, 0, …] and [2, 0, 1, 2, 0, 1, …].

Figure 1. Morphing the E♭ harmonic series with progressive changes of 0, 1 and 2 semitones

Inspired by music for the Indonesian gamelan, where simultaneous attacks of the
various metallophones delimit structural points and produce rich resonances saturated in
overtones, these clusters formed a slow chaconne which progressed throughout the entire
piece. They were placed at precisely spaced intervals, a colotomic division of form into
sections. Each section featured one or more of several types of processes which moved
slowly between the different clusters framing the edges, and the compositional choice was
made to quantize to the pitches to the nearest 100 midicents as contrast to the various
microtonal interpolations which were constructed to span between them.

Figure 2. Self-onlist process


One such interpolation process carved out successively smaller subsets of a cluster,
using the onlist function of omloop. Onlists of pitches were sorted in ascending order to give
increasingly higher and higher subgroups, suggesting the ascending notion of a harmonic
series. For further textural interest, alternate tones from the same chord were selected
to create a “trill”: tones of index [0, 2, 4, 6, …] with those of index [1, 3, 5, …], both subsets onlisted
as well. To form a connection with the next chord, the process was reversed, resulting
in a descending reverse-onlisted “trill.” Juxtaposing two such “trills” thus gave an arch
profile, with ascending and descending contours (see figures 2 and 3).
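Read this way, the process amounts to taking successive tails of a sorted chord. A minimal Common Lisp sketch, assuming chords as midicent lists (the actual omloop patch is of course graphical):

(defun self-onlist (chord)
  "Successively smaller subsets of an ascending chord, each dropping
the lowest remaining tone."
  (loop for tail on (sort (copy-list chord) #'<)
        collect (copy-list tail)))

;; (self-onlist '(6300 7500 8200 8800))
;; => ((6300 7500 8200 8800) (7500 8200 8800) (8200 8800) (8800))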

(a)

(b)

Figure 3. (a) Self-onlist of the E♭ harmonic series. (b) Self-onlist linking two clusters

More standard interpolations between two sonorities were used. In a specified number
of steps, tones of corresponding index number between two clusters were spanned (see
figures 4, 5 and 6).
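A minimal sketch of this index-to-index interpolation, assuming two equal-sized chords given as midicent lists (illustrative only; the corresponding OpenMusic patches appear in figures 4-6):

(defun interpolate-chords (chord-a chord-b steps)
  "STEPS chords moving linearly, tone by corresponding tone, from
CHORD-A to CHORD-B, endpoints included."
  (loop for k from 0 below steps
        for frac = (/ k (1- steps))
        collect (mapcar (lambda (a b) (round (+ a (* frac (- b a)))))
                        chord-a chord-b)))

;; (interpolate-chords '(6000 6500 6700) '(6050 6600 6900) 5)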
However, for variety, this regular interpolation process was carried out in several
stages. For example, an intervening “hybrid” chord was formed from specific tones of
chord A which moved to specific tones of chord B, but not necessarily of corresponding
index position. In figure 7, the tones of index [1, 2, 3] were interpolated to tones [3,
4, 5] of chord B, the other tones of chord A remaining unchanged. Then interpolation to
the following B chord was linear, i.e. with corresponding index positions (see figure 8).
Instead of a hybrid chord dividing interpolation into stages, an interval could be placed
between sonorities A and B. This created a wedge profile: tones of A were constrained
to move to the closest tone of the intervening interval. This interval then itself fanned
back out to tones of B (see figure 9).
Using this same “bipart interpolation” algorithm, but reversing the ordering of either
A or B, a swivel reversed the mapping of index numbers. Now, traversing registers, each
note of one cluster was interpolated to the “other” note of the interval, i.e. that not
closest, and back out again. The highest note of A was interpolated to the lowest note
of B, similarly, the next-highest of A to the next-lowest of B, etc., as shown in figure 10


Figure 4. Interpolation patch

Figure 5. Interpolation omloop

Figure 6. Interpolation process


Figure 7. Hybrid interpolation patch

Figure 8. Hybrid interpolation

(NB: the interpolation patches were shown in figures 4, 5 and 6).


Two adjacent clusters were occasionally directly interleaved [A B A B . . . ]. However,
each cluster was constrained either to grow out of or collapse into a single note, variations
of omloop’s onlist process created by various orderings of the indices of chord tones. The
single “target” tone was chosen variably to be the highest, lowest, or median tone. This
produced for each cluster a wedge of differing triangular shapes, and those of two adjacent
clusters were interwoven attack for attack (figure 11).
In variation to interweaving, movement from iterations of cluster A to those of cluster
B was shaped with controlled amounts of randomness added to repetitions of subsets of
the chords (figure 12). The minimum and maximum number of tones possible for each
attack were specified, for each subsection of this “gamelan” tintinnabulation.
As shown in figure 13, given a total of x attacks to move from A to B, the omloop
partial_sums found the highest number y of an arithmetic series which would yield a sum of
terms equal to or smaller than x. This y was then divided by two to account for sampling
from both the A and B collections. Subsequently, omloop two_series created two arith-
metic series, [1, 2, …, y/2] and its reverse [y/2, y/2 − 1, …, 1]. These represented
the number of times for omloop choice to choose, respectively, from the pool of A and B
clusters: 1 tone from A, y/2 from B, then 2 from A, (y/2 − 1) from B, etc. (figure 14).
The combined results from A and B, eg. 1 selection from A and y/2 selections from B,
were randomly permuted among themselves to avoid any audible linear predictability.
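The arithmetic behind partial_sums and two_series is easy to restate. A sketch, under the assumption that the series in question is the triangular series 1 + 2 + … + y:

(defun highest-series-length (x)
  "Largest Y whose sum 1 + 2 + ... + Y does not exceed X attacks."
  (loop for y from 1
        while (<= (/ (* y (1+ y)) 2) x)
        finally (return (1- y))))

(defun two-series (y)
  "Ascending counts for cluster A and descending counts for cluster B."
  (let ((half (floor y 2)))
    (list (loop for i from 1 to half collect i)
          (loop for i from half downto 1 collect i))))

;; (highest-series-length 10) => 4, since 1+2+3+4 = 10
;; (two-series 4) => ((1 2) (2 1))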
Associated with the various types of pitch interpolations between clusters were convex


Figure 9. Bipart interpolation patch

Figure 10. (a) Swivel interpolation: chord-interval-chord. (b) Single line interpolation reversing
registers.

or concave tempi curves of attack times and durations. Growing either shorter then
longer, or vice versa, these time-attack arches were calculated from sampling exponential
growth and decay breakpoint functions, BPF objects in OpenMusic. Gauss sampling was
used to add a controlled amount of random variation to each sample of a BPF curve,
using the gauss object from the OMAlea library of chaos functions. To create a concave
shape, two exponential curves were simply juxtaposed, one in reverse of the other, and
inverting this arch gave the other shape (see figure 15).
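A sketch of one such arch, with a hand-rolled Box-Muller routine standing in for OMAlea’s gauss object (all names and parameters here are illustrative):

(defun gauss-jitter (x sigma)
  "X plus zero-mean gaussian noise (Box-Muller); a stand-in for the
OMAlea gauss sampling used in the piece."
  (+ x (* sigma
          (sqrt (* -2 (log (- 1 (random 1.0)))))
          (cos (* 2 pi (random 1.0))))))

(defun arch-durations (n dur-max dur-min sigma)
  "Concave arch of N durations: exponential descent from DUR-MAX to
DUR-MIN, then its mirror image, each sample jittered."
  (let* ((half (ceiling n 2))
         (ratio (expt (/ dur-min dur-max) (/ 1.0 (1- half))))
         (down (loop for i from 0 below half
                     collect (gauss-jitter (* dur-max (expt ratio i))
                                           sigma))))
    (append down (reverse down))))

;; (arch-durations 8 1.0 0.2 0.02) => eight durations shrinking then growing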
As shown in the patches of figure 16, the minimum and maximum possible durations of
attacks for each section were specified, as well as the number of subsections desired. Pairs of
durations were generated for each such subsection, [x, x × y], with y representing the amount of
“bend” in the curve, i.e. the maximum deviation from x, for example 1.5 or 0.5. The y bend-values
themselves changed for each subsection, and were derived from sampling an arithmetic


Figure 11. Interweaving two onlisted clusters

Figure 12. Moving from cluster A to cluster B

series.
Using these “bend duration pairs”, a series of durations was generated spanning in value
from x to x × y and back to x. The durations of attacks for each subsection were summed,
and compared to original subsection lengths, which had been derived by sampling a
BPF curve of exponential growth or decay. Any extra attack durations were discarded.
To differentiate between increasing or decreasing subsection lengths, the output BPF-
sampling of the exponential curve was simply reversed.
Successive section lengths of the composition were progressively shortened until a climactic
middle “interruption” by a whooshing glissando, a clin d’oeil/oreille to the tradition of
musique concrète. Then sections were lengthened until a reprise of the opening material
of the very-low E♭ pedal point and irregular glissandos. The form of the entire work thus
suggested a “wedge”, correlated with changes in frequency content of the types of sounds
being filtered for each section.
Starting from lower-frequency sounds, e.g. ocean waves, AudioSculpt cross-synthesis
between adjacent sections with progressively higher-frequency material continued un-
til the middle section. Here the highest-frequency sounds were used, e.g. teeth-


Figure 13. partial_sums omloop

Figure 14. choice omloop

brushing. Then the entire process was reversed. Thus, in terms of sonic material: [A,
A+B, B, …, M+N, N, N+M, …, B, B+A, A].
The results of the above algorithmic processes, all MIDI information, were imported
into AudioSculpt using the seq-to-fbreakpt object in the OM-AS library2. This object
interprets pitch and rhythm information as textfiles of parameter vectors to be used for
breakpoint filtering. Subsequently, the textfiles are designated as the parameterfiles for

2 Tutschku, the developer of the OM-AS library, has commented that this library permits one to sculpt
sounds by applying formalized parameters, in reversal of the spectral approach to composition of the
1970s, which sought to apply acoustic phenomena to instrumental writing. L’application des paramètres
compositionnels au traitement sonore dans la composition mixte et électroacoustique, in PRISMA 01
(Florence: EuresisEdizioni, 2003), 79.


Figure 15. Creating the tempi curves

Figure 16. Concave and convex arches tempi curve patches

SVP commands addressing the phase vocoder engine behind AudioSculpt, bypassing its
graphical interface. This object’s input parameters include fft-size, filter bandwidth, scaling
the maximum and minimum input MIDI velocities to output dB values, scan speed, i.e.
how fast breakpoint curves are sampled, and random variation of this scan speed. For
Sculpted Implosions, the bandwidth was kept very tight, around 10 Hz, to produce clearly
pitched articulations.
In addition to filtering, the MIDI information from some of these interpolative pro-
cesses was also used to play SampleCell. This sampler was filled with sounds resulting
from modifications of the Cross Dog AB patch by Zack Settel and Corte Lippe. Using
fft∼ and ifft∼ objects of Max/MSP, amplitude and phase information was exchanged


between different articulation types of the same note played on the French horn, e.g.
with mutes of different types, unmuted, flutter tongued. The wavering sounds were not
dissimilar to viola da gamba tremoli and effectively cut through the filtered sounds, sug-
gesting another textural layer. In the second half another process stands out in relief
by both compositional and sonic means: a slow-moving chaconne of mid-register
sonorities subjected to successive steps of microtonal detuning. Horn samples
of individual tones were analyzed in AudioSculpt to determine the exact fundamental
frequency before being subjected to the Max/MSP cross-synthesis described above.
Based on the 7th chord C F G B D, clusters were constructed by calculating pitch
transpositions maintaining each of these tones in turn fixed as “pedal.” Each such “morphed” cluster
was the result of successive shifts of x/4 tones as well as 1/8 tones, a kind of progressive
“stretch” upwards. By again sampling an exponential growth BPF curve, progressively
lengthening attack times were chosen, coordinating with the lengthening sections of this
second “chaconne” in the second half of the piece, to give an overall impression of “sinking”
back down to the low E♭ pedal texture.
To calculate the changes of transpositions for these clusters, e.g. keeping C fixed at
6000 midicents, F was shifted 1/4 tone (= 50 midicents), G 2/4 tones (= 100), B 3/4
tones (= 150) and D 4/4 tones (= 200). Then each of these was detuned by −1/8 and
+1/8 tone (−25 and +25 midicents). Another group of chords, still keeping C as pedal,
now shifted the other tones by multiples of 3/4 tone: a shift up of 3/4 tones (150) for F,
6/4 tones (300) for G, 9/4 tones (450) for B, 12/4 tones (600) for D. This process once
again was repeated using multiples of 5/4 tones, i.e. 250 midicents (see figure 17 and Table 1).
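The arithmetic of these shift-and-detune operations is mechanical enough to sketch; the following reproduces the Table 1 values (the function name is hypothetical):

(defun chaconne-cluster (chord pedal-index step)
  "Shift each tone of CHORD (midicents) by STEP quarter tones (50
midicents each) times its signed distance from the pedal tone, then
return the -1/8 and +1/8 tone detunings of the shifted cluster; the
pedal itself stays fixed."
  (let ((shifted (loop for tone in chord
                       for i from 0
                       collect (+ tone (* (- i pedal-index) step 50)))))
    (flet ((detune (delta)
             (loop for p in shifted
                   for i from 0
                   collect (if (= i pedal-index) p (+ p delta)))))
      (list (detune -25) (detune 25)))))

;; Keeping C fixed, multiples of 1/4 tone (first column of Table 1):
;; (chaconne-cluster '(6000 6500 6700 7100 7400) 0 1)
;; => ((6000 6525 6775 7225 7575) (6000 6575 6825 7275 7625))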

Figure 17. Horn Chaconne: successive 1/8 tone steps of two series with C and with F as pedal
tones

These detuned mid-register “chaconne” clusters reappear in the final section of the
piece, chosen continually at random by an aleatoric “music box” which added random
amounts of frequency shift. The audio output of this Max/MSP patch was treated with
external reverberation using an effects box to add sizzle.
The Spatialisateur module of Max/MSP and its GUI interface were used to control
dynamic movements. Using a feature of this software package, positions and rotations of
a sonorous object, the soundfiles themselves, were controlled by mouse, and the output
was recorded onto eight tracks. Using the circular GUI and hand movements of the
mouse, sounds were made to approach from and recede back to large distances compared
to putative speaker locations.
Rotation speeds were automated with lists for the Max object line. Rotations were
calculated to form Fibonacci ratios with each other, becoming increasingly faster,
suddenly reversing direction at maximal speed, then slowing down.

shifts of 1/4 (2/4, 3/4, 4/4), 3/4 (6/4, 9/4, 12/4) and 5/4 (10/4, 15/4, 20/4) tones, ±1/8 tone for each
C F G B D = 6000 6500 6700 7100 7400 (midicents)

Keeping C fixed at 6000 midicents:
D = 7400   +4/4 ±1/8: +175-225 = 7575-7625   +12/4 ±1/8: +575-625 = 7975-8025   +20/4 ±1/8: +975-1025 = 8375-8425
B = 7100   +3/4 ±1/8: +125-175 = 7225-7275   +9/4 ±1/8: +425-475 = 7525-7575    +15/4 ±1/8: +725-775 = 7825-7875
G = 6700   +2/4 ±1/8: +75-125 = 6775-6825    +6/4 ±1/8: +275-325 = 6975-7025    +10/4 ±1/8: +475-525 = 7175-7225
F = 6500   +1/4 ±1/8: +25-75 = 6525-6575     +3/4 ±1/8: +125-175 = 6625-6675    +5/4 ±1/8: +225-275 = 6725-6775
C axis     6000                               6000                               6000

Keeping F fixed at 6500 midicents:
D = 7400   +3/4 ±1/8: +125-175 = 7525-7575   +9/4 ±1/8: +425-475 = 7825-7875    +15/4 ±1/8: +725-775 = 8125-8175
B = 7100   +2/4 ±1/8: +75-125 = 7175-7225    +6/4 ±1/8: +275-325 = 7375-7425    +10/4 ±1/8: +475-525 = 7575-7625
G = 6700   +1/4 ±1/8: +25-75 = 6725-6775     +3/4 ±1/8: +125-175 = 6825-6875    +5/4 ±1/8: +225-275 = 6925-6975
F axis     6500                               6500                               6500
C = 6000   −1/4 ±1/8: −25-75 = 5925-5975     −3/4 ±1/8: −125-175 = 5825-5875    −5/4 ±1/8: −225-275 = 5725-5775

Table 1. Horn Chaconne calculations: examples with C and F as pedal tones

This entire process
was repeated several times. Though this 3-D process cannot be recreated in a stereo mix-
down, it comes across quite effectively with a diffusion of eight loudspeakers encircling
the audience.
The ambience of the entire piece is “drenched”: use of water samples, a contour for
the entire work which ascends slowly from a low pedal then sinks back down again, use
of rapidly moving filtered sounds which lack high-frequency content, as if played and
heard “underwater,” and spatialization which surrounds and engulfs the audience. To
some listeners the final section may seem like an unexpected shift in musical material,
but to the composer these sounds suggest having finally reached very deep waters - as
some listeners have remarked - outer space (!).

Ketty Nez
Composer/pianist Ketty Nez completed in 2002-3 a res-
idence of several months at the École Nationale de
Musique in Montbéliard, France, where she worked with
faculty and students on projects of live electronics and
improvisation. Her chamber opera An Opera in Devolu-
tion: Drama in 540 Seconds was premiered at the 2003
Festival A*Devantgarde in Munich. New projects include
commissions for various ensembles in France and at the
University of Iowa, where she currently teaches as Visit-
ing Assistant Professor of composition and theory. This
fall she joins the faculty at Boston University.
In 2001 she was a visiting composer at Stanford University’s Center for Computer
Research in Music and Acoustics (CCRMA), and in 1998, she participated in
the computer music course at the Institute de Recherche et Coordination Acous-
tique/Musique (IRCAM) in Paris. Prior to her studies at IRCAM, she worked
for two years with Louis Andriessen in Amsterdam, where she co-founded the
international contemporary music series Concerten Tot and Met.

Her music has been played at festivals in the US as well as abroad, including
Bulgaria, England, Finland, France, Germany, Holland, and Japan. She spent
the year 1988 in Japan, studying with Michio Mamiya and writing for traditional
Japanese instruments. She has participated as fellow in the Aspen Music Festival
(in 2001, 1991, and 1989), the 1998 June in Buffalo Festival, the 1997 Britten-Pears
School Composition Course (Aldeburgh, England), the 1996 California State
University Summer Arts Composition Workshop, the 1995 Tanglewood Music
Center, and the 1990 Pacific Composers Conference (in Sapporo, Japan).

Her education includes a doctorate in composition from the University of
California at Berkeley, a master’s degree in composition from the Eastman School
of Music, a bachelor’s degree in piano performance from the Curtis Institute of
Music, and a bachelor’s degree in psychology from Bryn Mawr College.

STRETTE
- Hèctor Parra -

Abstract. Strette1 is a 14-minute monodrama for soprano, live electronics, lighting
and real-time video, based on the poem Engführung by Paul Celan. The various elements
of the show (sound, human voice, text, image and scenography) are processed, and relate
to each other, in such a way that the dramatic nucleus is constituted by the sound flow
itself; the intended effect is to immerse the public in images and in a psycho-acoustic
space. This may result in a polyhedral perception of the content of Engführung.

In the pages that follow I will attempt to explain the specific issues and problems we
tackled while composing Strette. An important role in the development of the musical
structures and the vocal score was played by the computer-assisted composition program
OpenMusic.

***

1 Dramatic and musical issues


1.1 The interplay between language, poetry and vocal music
Engführung by Paul Celan is a vivid and poetic example of the restoration of the Ger-
man language after it had been perverted by the Nazis. The poem does not describe a
reality; it is the text that constitutes the ’reality itself’. Consequently, there is no room
for mimicry nor for the representation of a reality lying outside the language. This note-
worthy characteristic of Celan’s poem makes it possible to establish direct links between
music and text at a very basic language level. The music benefits from a greater degree
of autonomy, allowing for a less destructive treatment of its acoustic and syntactical
identity than is usual in vocal works. We could say that the primary goal of Strette is
the recovery of the tragic ethos through a more abstract dramatization than those cus-
tomarily afforded by declamatory performances. If the experience is a moving one, it is
the result of the tension between the temporal sound flow created by the music (vocal
and electronic) and the simultaneous interacting flux of visual images.

1 Strette was composed during the Cours de Composition et d’Informatique Musicale de l’IRCAM
2002-2003 and premiered by the soprano Valérie Philippin at IRCAM’s Projection Space on October
15th 2003. The teaching assistant was Mikhail Malt and the video assistant was Emmanuel Jourdan.
Benjamin Thigpen, Jean Lochard and Mauro Lanza helped with the electronics. I would like to thank
them for their confidence and for their help in creating Strette.
This work was sponsored by a grant from the Departament de Cultura of the Catalan Government. I
would like to thank Josep Manel Parra for his wise advice and his help in creating the present text.


The first step was to develop a style of vocal writing that, while maintaining reasonable
contact with the rhythmic and declamatory characteristics of the poem, would be strictly
based on organizational musical principles.
The next phase, linking music and image (shape and colour), was accomplished by
writing a vocal and instrumental script according to principles analogous to those of
colour theory in oil painting. Greatly inspired by Cézanne’s Château Noir, a set of
patches was constructed in OpenMusic, allowing me to compose along strongly gestural lines.
Finally, with the help of the program Max/MSP-Jitter, certain image-based control
procedures were carried out in realtime, enabling live interaction between the soprano’s
singing face and the music.
In a nutshell, a certain amount of hard work using the above mentioned computer
programs resulted in a set of new questions and opened new paths in the search for more
direct (i.e. non-metaphorical) relationships between acoustic and visual thought, and
sensory experiences.

1.2 Dramatic role of the image. Communication between sound and image
The intention was that the public would experience as absolute reality the interaction
between the text, the sound and the transformed image of the soprano. In accordance
with the essentially open nature of Celan’s poem, the soprano does not give a theatri-
cal performance. She simply sings. The drama, which lies within the sound itself, is
simultaneously developed and exposed in the visual domain. A veil, which together with
the lighting, constitutes the only scenery, acts as a kind of resonating membrane for
the action in the piece. The projection of the video-image of the soprano transformed
in realtime with Max/MSP-Jitter creates technology-driven communication between the
singer and the public.

Figure 1. The stage

Strette ends much in the same way as it begins. But for the listener there is a striking
difference - at least there should be! After having watched the dramatico-musical piece
he or she should have gained a more acute and distinct perception of the musical, textual
and spatio-visual material, as well as a vivid awareness of having been in a unique global
communication space.


2 Working Strette’s vocal part

2.1 Starting points: Engführung and Château Noir


The structure and articulation of the vocal discourse, comprising not only its main divi-
sion into sections but also the rhythmic and interval sequences, are based on a twofold
source. The first one is obviously the poem itself, with its nine sections and multiple sub-
sections. We have tried to respect the semantic and syntactical elements of each stanza
and each verse. This was necessary in order to achieve a meaningful musical development
of the drama expressed by Celan.
The other source was Château Noir (1904); an oil painting by Cézanne that was
owned by Picasso and that now hangs in the Picasso Museum in Paris. I believe that
the basis of this work is the perception of a deep structural analogy between Celan’s
impact on language through the poem and Cézanne’s impact upon form and colour, the
latter conveyed through a ‘pictorially guided’ two-dimensional visual medium. I believe
this is important for understanding the present essay.
Thus, Strette was conceived as a sequence in which the rhythmic and interval ma-
terials are subjected to driving forces and structural tensions that follow aesthetically
significant colour relationships. With this in mind, we began to search for quantifiable
relationships between spaces that would parametrize the music or sound phenomena and
perception on one hand, and the spaces that parametrize colour vibrations and modula-
tions on the other. These relationships, which we will consider in detail, were a way of
gaining control over the rhythm and the intervals, which were intended to be a flexible
representation of the poem’s rather strict syntactical and semantical character. The anal-
ysis of the poem Engführung by Peter Szondi and Werner Wögerbauer proved a useful
guide, and provided valuable ideas for controlling the dramatic flow and achieving proper
coordination.
In Château Noir, after a strong initial blue-orange polarization, the light is progres-
sively broken down into its spectral components. At the same time, because of the
emerging gray scale tones, and the tension generated by the diminishing colour, we begin
to penetrate the various levels of the painting. It is as if the modulation allows us to
bridge the chasm between the initial extreme blue and orange tones. However, a feeling
of tension is generated by the conflict between this depth and the bi-dimensional char-
acter of the surface, a character that Cézanne strongly reinforces by means of abstract
patches.
We subjected the rhythmic and interval parameters to temporal formalisation, in ac-
cordance with the above-described chromatic path taken by [the treatment of] Cézanne’s
painting. The guiding idea was that because of its specific musical nature, the rhyth-
mic and interval characteristics of the vocal line could act upon the listener’s hearing
in a similar way to the effect of the colour modulations of Château Noir upon visual
perception. The tool used to implement this idea was OpenMusic.
We must stress that only a fraction of the colours used in the realtime modification
of the soprano’s video-image actually correspond sequentially to those that were
used in structuring the vocal part. We believe that this process, carried out via the
Jitter/MaxMSP program, shows up the structural linkages that were created between
sound and colour.


Figure 2. Château Noir by Paul Cézanne

2.2 Formalisation of the relationships colour-rhythm and colour-pitch. Implementation in OpenMusic
I should mention that the expressed aim to connect and develop parallel lines in colour
and sound spaces does not imply any confusion of identity or actual mixing of the two
spaces - I am aware that each has its own distinct character and behaviour. Rather
the idea was to take advantage of the qualities of the colours and their effect on our
perceptual and cognitive processes in order to create a set of pitch and duration values.
The latter, based on clear and effective organisational principles, turned out to be able
to accommodate a considerable degree of musical variation and richness.

Colorimetric system used as a basis for the formalization process. Data taken
from Château Noir
The colorimetric system is founded upon the three perceptual colour parameters: Hue-
Saturation-Intensity. HSI lies at the basis of my formalization of rhythmic and interval
thought, which may be subject to a parallel (and even synchronous) development. The
three dimensions of HSI space are usually represented by a solid cylinder with luminosity


along the vertical axis, saturation along the radial coordinate and hue along the
angular one. Each colour is then represented by three coordinates or numerical values:
the specific colour tonality (from 0 to 360), and saturation and intensity from 0 to 100
(figure 3).

Figure 3. Hue-Saturation-Intensity

It is obvious that the painting possessed many other interacting dimensions specifically
related to colour and plasticity as, for instance, texture density, opacity and directionality
of the brushstroke (fundamental in Cézanne), the shapes and kinds of colour patches,
etc. They are very difficult to analyze and cannot be manipulated in a simple way, even
using a computer.
After a number of visits to the Picasso Museum for detailed study of Cézanne’s
masterpiece I was able to gain some ideas for my visual path through the painting, as
well as an understanding of which colour patches were interacting at any given moment,
and to what degree of intensity. I then carried out a temporal articulation of the chosen
sequence of interacting patches, assembling them in little groups which could be made
to correspond to each of the stanzas of Celan’s poem. This segmentation process was in
no way a straightforward mechanical one. Each segment had to possess its own pictorial
sense and, at the same time, share some semantic or syntactical characteristics with the
corresponding stanza. The central part of the poem provides an illustration of this. The
language possesses a highly detached character and the two worlds, in opposition at the
beginning, are reconciled at the end. The pictorial equivalent can be found in the full
spectral breaking down of the light after the extreme initial blue-orange polarization.
Although it is subjective to draw parallels of this kind, we are convinced they are often
at the very root of creative artistic activity. Naturally Celan’s poem has its own very
special and suggestive qualities; as Wögerbauer has noted, “throughout Engführung
poetic creation is analysed in a succession of stages within a general synaesthesia”2.

2 Wögerbauer, Analyse de Strette, 1991.


With this in mind I proceeded to choose, for each colour patch in a digitized image
of the Château Noir, those pixels that seemed to me as close as possible to the colour
I had experienced at the museum. Then I evaluated the colorimetric mean of each zone
that seemed to be of relevance during my visual path or pictorial reading of Cézanne’s
work.
The network of HSI indices obtained in this manner was the expression (albeit a
partial and oversimplified one) of the successive strains and stresses I experienced while
looking at the painting. This data was now ready to be used as a starting point for the
set of musical patches that we present below.

Colour to rhythm: formalisation and implementation into OpenMusic

In accordance with the overall plan of the piece, I designed a rhythmic space based on
small units, each of them possessing its own temporal identity. Each has a minimum
of three attacks and a maximum of seven. Each is associated with a colour, and the
vertical polyphonic interaction between them had to produce a dynamic tension similar
to that generated by the visual clash of colours. A way could now be found to translate
Cézanne's pictorial rhythmical structures and tensions into musical language. We outline
below how this translation from the colorimetric to the musical space generated a rich
basic material appropriate to the original conception of the piece.
Each of the rhythmic units comprises two superimposed cells of attacks, one in pro-
gressive accelerando and the other in progressive rallentando. Each attack is triggered
at a fixed discrete increment of an exponential (for the accelerando) or logarithmic (for
the rallentando) curve that is characteristic of the hue and is defined at Saturation = 100.
The number of attacks for each rhythmic unit was kept to a minimum to maintain the
character of each cell. The colours in the blue-orange axis, which constitute the pillars
of Cézanne's painting, were assigned either seven or eight attacks, while other colours
were assigned six. With decreasing saturation these functions tended to become linear,
producing regular attacks in such a way that grays corresponded to complete regularity.
The upper or first cell in time has its maximum acceleration at yellow, and maximum
rallentando at magenta. The lower or second cell is opposed in character. The opposi-
tion is reduced for the flat complementary colours red and green in which one of the cells
presents equally spaced attacks. These characteristics are shown in figure 4.
The size ratio of the upper to the lower cell is also fixed as a function of hue for each
value of saturation. It ranges from 1/10 at yellow to 5/1 at violet. Again the relative
size tends towards 1 (uniformity) with decreasing saturation, as is reflected in a
non-quantified manner in figure 4. The delay between the upper and lower cell is also a
characteristic function of hue that tends towards zero with decreasing saturation in a hue-
dependent way: hot colours decay more quickly than cold ones, in accordance with colour
perception theory. Flat colours (red and green) always present zero delay (simultaneity).
Orange, yellow and violet, which give pictorial depth, can reach a maximum delay of
20% of the size of the first cell. Again this has been qualitatively reflected in figure 4.
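As a rough sketch of this construction - the curve shapes and constants below are assumptions of mine, not values taken from the patch - the onsets of the two cells can be generated by placing attacks at fixed increments along an exponential or a logarithmic curve:

```python
import math

def cell(n_attacks, duration_ms, kind, k=3.0):
    """Onset times (ms) of one cell: attacks at fixed vertical increments of
    an exponential curve ('accel') or a logarithmic one ('rall')."""
    onsets = []
    for i in range(n_attacks):
        y = i / (n_attacks - 1)                       # fixed increment, 0..1
        if kind == "accel":                           # gaps between onsets shrink
            t = math.log(1 + y * (math.exp(k) - 1)) / k
        else:                                         # 'rall': gaps grow
            t = (math.exp(k * y) - 1) / (math.exp(k) - 1)
        onsets.append(round(duration_ms * t))
    return onsets

upper = cell(7, 3000, "accel")                        # e.g. a blue-orange-axis colour
lower = [t + 600 for t in cell(7, 3000, "rall")]      # delayed second cell
print(upper)   # inter-attack intervals get shorter (accelerando)
print(lower)   # inter-attack intervals get longer (rallentando)
```

At zero saturation the two branches would simply be replaced by a linear ramp, yielding the complete regularity of the grays described above.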
Finally, the total temporal extension of each rhythmic unit is also a function
of hue and intensity. At the maximum intensity of 100, yellow and orange have
maximum durations of 6 and 5 seconds, respectively. Their complementary colours,
violet and blue, are given the minimum duration of 1 second. With decreasing intensity
this distribution of duration values becomes almost inverted, in accordance with the
visual rhythmic tension in Cézanne's painting: in the brightest zones, blues act as
accents or activators of the more extensive orange patches, while in the dark regions
blues become dominant and profound, corresponding to rhythmic units of greater
duration and fewer attacks. In fact, the displacement towards the dark regions of lesser
'luminic vibration' corresponds to the disappearance of part of the rhythmic content,
specifically those attacks whose closeness makes the rhythmic unit more vibrant.

Figure 4. Rhythmic units
Figure 5 is a simplified outline of how the rhythmic patch generates and utilises the
above-mentioned variables. The example contains the rhythmic unit that corresponds
to the colour orange (hue = 30), with a saturation of 50% and an intensity of 80%. The
as yet unquantified intervals between attacks are given in milliseconds.

Figure 5. Construction of the rhythmic unit corresponding to the colour (H=30, S=50, V=80)


Thus, as shown in figure 6, the patch that gives the rhythmic variables consists of
three main parts. First, there are the three entries corresponding to the colorimetric
data in the Hue, Saturation and Intensity codification; this input is represented by the
three upper arrows. Then there is the complex subpatch network that sequentially
implements the transformation of a colour path into a rhythmic path. Finally there is
the effective construction of the rhythmic units, expressed in the form of a chord-seq
object parametrized in milliseconds and represented by the bottom arrow.

Figure 6. The rhythmic patch

Figure 7 is an example of the rhythmical musical tensions generated by the opposing
colour pairs orange-blue and red-green, and the rhythmic sequences that result from
the superimposition of the two corresponding rhythmic units. In the orange-blue case,
the perception of extreme closeness and depth requires characteristic time values that
amount to a deformation of the uniform flow of time. In musical terms these irregularities
take the form of an initial energy propulsion (initial blue attacks) followed by a central
development (orange and blue together), suddenly stopped by a second propulsion (blue
ends) that gives way to the final expansion (orange alone). In the red/magenta-green
case (the pictorial flat colours), we have the superimposition of two regular patterns of
different durations, starting at the same time, that express less energetic vibrations.
These interaction types between rhythmical units take place in the maquette.


Figure 7. Two basic rhythmic sequences

From colour to pitch

In parallel to the colour-rhythm association, I developed a system of patches in Open-
Music that, for each colour, give an aggregate of pitches or 'chords' consisting of a
maximum of eight pitches and a minimum of one. It is the interval relationships that
are important here, and somewhat less important the absolute values of the frequencies.
The role of these pitch-groups is to provide a strongly characteristic and differentiated
harmonic colour, even in a monodic disposition, as is the case in Strette.
As the starting point for the computation of the chords I took the first 32 partials of
a harmonic series whose fundamental pitch I altered several times in accordance with the
formal structure devised for Strette. Thus, to the three continuous colorimetric indices
is added a fourth, external parameter of a discrete and intrinsically musical nature
(figure 8).


Figure 8. Patch for the pitch


Again, as in the case of rhythm, it is the Hue parameter that plays the dominant role
in determining the idiosyncrasy of each pitch aggregate, defined at maximum saturation
and intensity by means of the sub-patch represented in figure 9.

Figure 9. Sub-patch for the partials (hot colours)

Decreasing these parameters modifies the aggregate by reducing the number of
elements and by homogenising the interval ratios. As we can see in the upper part of
figure 8, small displacements of the fundamental of the harmonic sequence are carried
out in such a way that the colours close to the yellow-violet axis are raised a half-tone,
while those close to the green-red axis are lowered by a half-tone. This simple procedure
is an attempt to prevent pitches belonging to chromatically opposed spaces from coinciding
too often.
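The following Python fragment gives a rough idea of this stage (the axis test and the choice of which partials to keep are simplifications of mine; the actual selection uses the Esquisse functions discussed below):

```python
import math

def freq_to_midicents(f, ref=440.0):
    """Frequency in Hz to midicents (6900 = A4)."""
    return 6900 + 1200 * math.log2(f / ref)

def axis_distance(hue, axis_deg):
    """Angular distance (degrees) from a hue to the nearer pole of an axis."""
    d = min(abs(hue - a) % 360 for a in (axis_deg, axis_deg + 180))
    return min(d, 360 - d)

def aggregate(hue, fundamental_hz=65.4, size=8):
    """First 32 partials of a harmonic series, raised a half-tone near the
    yellow-violet axis and lowered near the green-red axis; at most eight
    pitches are kept (which eight is a simplification here)."""
    shift = 100 if axis_distance(hue, 60) < axis_distance(hue, 120) else -100
    partials = [fundamental_hz * k for k in range(1, 33)]
    return [round(freq_to_midicents(f)) + shift for f in partials][:size]

print(aggregate(60))    # yellow: aggregate raised a half-tone
print(aggregate(120))   # green: aggregate lowered a half-tone
```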
In figure 10 we see how the two functions 'virtual-fundamental' and 'best-
frequency', belonging to the OpenMusic library Esquisse, provide some of the pitches that
will be used for a hot colour and its complementary cold one. Together with other partials
from the upper lines of this lower section, the 'cold colour' aggregates will attract and
acoustically complement those that correspond to the complementary 'hot colours'.


Figure 10. Sub-patch for the partials (cold colours)

As saturation decreases, the pitch aggregates lose notes until they are reduced to a
single tone for saturation values lower than 20. The 'frequency-distortion' function of
Esquisse generates a reduction or compactification of the original range of the spectrum.
It transforms the original irregular intervals, defined in eighths of a tone at saturation
100, into regular interval progressions. For saturation between 50 and 75 the resolution
is reduced to quarter tones, and to half tones for saturation values below 50.

To sum up, the progressive loss of saturation that describes the progression to gray
is made to correspond to a process of homogenization by reduction of the elements that
characterize the idiosyncrasy of the initial chords. In figure 12 this process is represented
for hue = 60°, intensity = 100 and saturation values of 100, 75, 60, 50 and 20.

Finally, the decrease in the intensity parameter also reduces the number of notes as
well as the range of the spectrum. However, in this case concentration does not take
place around the central pitch of the chord, but rather in the higher frequencies in the
case of the hot colours, and in the lower frequencies in the case of the complementary
cold ones.


Figure 11. Chords corresponding to successive changes of 15° in Hue, at the maximum level
of Saturation and Intensity

Figure 12. Progressive decrease of the saturation for Hue=60 and Intensity=100

2.3 Final steps towards the score


The maquette

For each visual phrase of my aesthetic perception of Cézanne’s Château Noir I constructed
a maquette timeline along which I laid out the small rhythmic units corresponding to
each colour present. Each maquette covers a 10 to 30 second timespan. The particular
spatial layout of the different rhythmic units is inspired by the functions played by the
corresponding colours, as mentioned above.
On the right side of figure 13 can be seen the rhythmic patch analyzed above, inte-
grated into the maquette, as well as the list of colours that constitute its input. I think
this procedure is sufficient to accommodate a strong and flexible interaction between the
output of the OpenMusic patches and the composer’s musical requirements.
After transferring the maquette's contents to the OpenMusic multi-sequence MIDI
editor, I carried out a simultaneous filtering of the sequence to suppress all those attacks
that seemed to me musically uninteresting, and also to implement the pitch corresponding
to each colour in what was in fact the pre-composition of the vocal line. This work was
done at the same time as the first manuscript drafts of the score, while thinking about
the best means of processing the text.


Figure 13. Maquette corresponding to the first 30 seconds of Strette

The process of rhythmic quantification


The quantification of the musical phrases was carried out using the rhythmic quantifier
provided by the OMKant library3. Figure 14 illustrates the three stages of this
process:
1. Marking the sequence at those points where I want to start a measure or place a
strong beat.
2. Triggering the quantification process, using the tempo I consider most appropriate
at this point of the piece and which, at the same time, offers a quantification reasonably
close to that of the original sequence.
3. Displaying the result in the form of actual rhythms, with a fixed tempo applied to
certain measures.
It is necessary to stress that the specific characteristics of each phrase make this
process different each time; there is no routine automatic processing. For instance,
although I sometimes used the OMKant object to force the choice of rhythms that
satisfied me more than those resulting from the quantification of certain beats, I
nevertheless strove to adhere to the derived material.
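OMKant itself does far more, but the core operation being automated can be suggested in a few lines of Python (the grid choice and input values are purely illustrative):

```python
def quantize(onsets_ms, tempo=60, subdivisions=4):
    """Snap onsets (ms) to the nearest subdivision of a beat at a given tempo;
    the real quantifier also chooses measures, tuplets and tempi."""
    grid = (60000 / tempo) / subdivisions        # one grid unit in ms
    return [round(t / grid) * grid for t in onsets_ms]

print(quantize([0, 240, 520, 980]))              # [0.0, 250.0, 500.0, 1000.0]
```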
The final version of the fragment quantified in figure 14 is shown in figure 15. It
is obvious that certain of the rhythmic characteristics proposed by OMKant have been

3 The OMKant library was conceived and programmed by Benoit Meudic.


Figure 14. Quantification with the OMKant library

transcribed. In general their gesturality has been stressed and adapted to the text by
combining various dynamic progressions and structured silences. In other places, e.g. in
the last part of the second measure, the entry point of the second articulation was
considerably modified.

Figure 15. Example from the score (measures 18 and 19 of Strette)

Bibliography
[1] Carpenter J. and Howard F.: Color in Art. Fogg Art Museum, Harvard University,
1974.
[2] Gowing L.: Cézanne: La logique des sensations organisées. Éditions Macula, Paris,
1992.
[3] Itten J.: Art de la Couleur. Dessain et Tolra/VUEF, 2001.
[4] Lhote A. and Howard F.: Traités du Paysage et de la Figure. Bernard Grasset
Éditeur, Paris.
[5] Machotka P.: Cézanne, Landscape into Art. Yale University Press, New Haven
and London, 1996.
[6] Meudic B.: Librairie OMKant 3.0. Edited by Karim Haddad. IRCAM, Paris, 2003.
[7] Montchaud R.: La couleur et ses accords. Éditions Fleurus, 1994.
[8] Szondi P.: Études sur Paul Celan: lecture de Strette. In Poésies et poétiques de
la modernité. Presses Universitaires de Lille, France.
[9] Wögerbauer W.: Analyse de Strette. 1991.

Hèctor Parra
Hèctor Parra is a Catalan composer, born in
Barcelona in 1976. He studied music at the Conser-
vatori Superior de Música of Barcelona (composition
with Carles Guinovart and David Padrós, piano
with Jesús Crespo), where he was awarded Honour
Prizes in Composition, Piano and Harmony. He has
taken an active part in numerous workshops for young
composers, among them those at Royaumont 2001,
Takefu 2002 in Japan, and Centre Acanthes 2002 and
2004. In 2002-2003 he followed the Composition and
Computer Music Courses at IRCAM, where he
was taught by Brian Ferneyhough, Jonathan Harvey, Philippe Hurel, Philippe
Leroux, José Manuel López López, Mikhail Malt, Philippe Manoury, Tristan Murail
and Brice Pauset.

His compositions have been played at the international festivals of IRCAM
Résonances, Festival d'Avignon, Royaumont-Voix Nouvelles, Elektronische Nacht
(Stuttgart), Madrid-CDMC, Mallorca, Ensemble Intercontemporain Tremplin at
the Centre Georges Pompidou, and Maison de la Danse de Lyon, and also broadcast
by France Culture, France Musique and SW-2 (Germany). Recently he was invited
by the Stuttgart Opera House to give two workshop-concert sessions in the Forum
Neues Musiktheater based on his monodrama Strette, and he has taken part in
a composition residency with the Youth Ballet of the CNSMD Lyon as well as at
IRCAM. Several performances are scheduled in France, Hungary and Yugoslavia
for 2005. His pieces have been premiered by the Ensemble Intercontemporain, the
Arditti String Quartet, the Ensemble Recherche, Holland Symfonia and the Duet Nataraya,
among others.

He was a finalist at the Gaudeamus International Composition Competition
2005. In 2002 he obtained the Composition Prize of the INAEM (Spanish National
Institute for the Scenic Arts and Music) - Colegio de España (Paris). He was
awarded scholarships from the "Agustí Pedro i Pons" Foundation at the University
of Barcelona (2001) and from the Catalan and Spanish Ministries of Culture (for
a DEA in computer-assisted composition at the University of Paris VIII, under the
direction of Horacio Vaggione). Since January 2005 he has been a research composer
at IRCAM in Paris.

Klangspiegel
- Luı́s Antunes Pena -

Abstract. This article is based on the piece Klangspiegel for quarter-tone trum-
pet, tam-tam, and tape. It describes the use of the computer and OpenMusic in two
different approaches: a) the spectral domain - the use and manipulation of a sample's
analysis data; and b) the combinatorial domain - the interpolation process and rule-
dependent relations between pitch, dynamics, and rhythm.

***

1 Introduction
Acoustic reality – The beauty of the inner

"I have sought for myself"


Heraclitus

Modern tools of analysis allow us to explore a sound’s micro-time domain. We can ex-
tract specific information from a sample, and reduce the individual sound - a complex and
somehow intangible entity - to a collection of concrete data. This information, represent-
ing only a limited reality of the sample, can then again be transformed back into sound
via electronic resynthesis, creating a virtual image of the original sample. The result of
this transformation, retaining an instrumental aura, is neither a copy, a duplication nor a
simulation of the original, but rather a new entity. Through instrumental resynthesis of
the original sample, we are then able to create another image, an introspective one: the
information which has been extracted from the original sound, analysed and converted
into pitches and dynamics is then played back by the trumpet, thereby projecting the
previously imperceptible inner structures of the sound into the audible domain.
Thus, the instrumental and electronic resynthesis enables two interpretations of a
sound sample’s interior based on the same analysis. The piece Klangspiegel 1 (2002)
for quarter-tone trumpet with tam-tam and 4-channel-tape, brings both representations
to the foreground, and plays with the images that arise from the resynthesis: Virtual,
Sound, Imaginary and Memory, corresponding to the four movements Klangspiegel I (for
tape), Klangspiegel II (tape and trumpet), Klangspiegel III (trumpet) and Klangspiegel
IV (trumpet and tape).

1 Sound Mirror.


2 Virtual mirror
2.1 Inside the trumpet
The mirror illusion is created through the resynthesis of various trumpet samples, some
of them using breath sounds or "normal" but extremely short tones with very unsta-
ble partials. The fact that ideal samples (having stable partials) for the analysis were
deliberately avoided reinforces the idea that the resynthesis in this piece is not simply
a reproduction of the original. At the same time, the autonomy of the image increases
in proportion to the distance between it and the object from which it was generated.
The mirror is thus a transformed image, which presents an interpretation of the original
object. This interpretation is dependent on the nature of the analysis techniques used
(limited by the technical resources at our disposal) and on the meaning imprinted upon
this data (compositional decisions).

Figure 1. Patch used for the instrumental and electronic resynthesis of a sample

2.2 Manipulating analysis data - frequency and amplitude filter


When working with the analysis data of a sample we are confronted with two problems:
the large amount of information and its musical relevance. The analysis, in this case the
result of a Fast Fourier Transform (FFT), rapidly generates information in such quantities
that we are able to work with it only with some difficulty - for each small portion of time
(about 20 ms), there are usually 1024 amplitude/frequency pairs. Furthermore, to use
this information in a compositional context, we have to interpret the results of analysis,
which have thus far been neutral and musically meaningless. Selection is then a necessary
means not only to make it humanly possible to deal with such a quantity of data, but also
to give sense to an abstract collection of data. Here, the computer - and particularly the
programming - can play an important role, by automating some aspects of the selection
process. The reduction (transformation of the whole into smaller parts) and sorting
(creation of hierarchies) effected by the computer are two possible ways to manipulate
large-scale data.
Considering a group of data as one entity is one way to reduce information. A first
criterion defines boundaries to limit the information within a fixed range, while a
second creates groups by selecting the information with the help of a fixed rule. With
these two selection principles we are able to use the same information in two different
contexts, creating differing views of the same entity (figure 2). Criterion 1 corresponds
to the traditional bandpass filter, selecting frequencies within the given limits, whereas
criterion 2 is used in this piece to create a spatial division, where a, b, c and d correspond
to speakers 1, 2, 3 and 4. Figure 3 represents the first 60 seconds of Klangspiegel I. Here,
the bandpass filter selects 13 bands dividing the frequency spectrum of sample 1 between
320 and 3840 Hz, with each of the frequency bands having a different envelope.

Figure 2. Selection criteria for the FFT data
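In code, the kinds of selection described here might look as follows (a minimal Python sketch of my own; the dealing rule for the four speakers is an assumption, since the exact grouping rule is not spelled out here):

```python
# pairs: list of (frequency_hz, amplitude) from one FFT frame.

def bandpass(pairs, lo, hi):
    """Criterion 1: keep only components inside a frequency band."""
    return [(f, a) for f, a in pairs if lo <= f <= hi]

def spatial_groups(pairs, n_speakers=4):
    """Criterion 2: deal the components out into groups a, b, c, d
    (here cyclically - an assumed rule) for speakers 1-4."""
    return [pairs[i::n_speakers] for i in range(n_speakers)]

def amplitude_filter(pairs, n=13):
    """Sorting: the n loudest components of the frame."""
    return sorted(pairs, key=lambda p: p[1], reverse=True)[:n]

frame = [(110.0, 0.2), (440.0, 0.9), (1320.0, 0.5), (3960.0, 0.1)]
print(bandpass(frame, 320, 3840))    # [(440.0, 0.9), (1320.0, 0.5)]
```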

The third possibility for selecting large-scale data is to sort the information using a
criterion which forms a new order of the whole, that is to say, to scale the data between
two opposed poles, from small to big, from high to low. This is the case of the amplitude
filter: here, the frequencies selected are the loudest (figure 4).
A further possibility is one that does not depend on a pre-defined rule or criterion,
but on an external factor in a specific temporal context. This is a complex selection
method, since it does not depend on any fixed rule but on the development of the
musical discourse: what comes before and what follows. This selection method could
be called an organic criterion.

Figure 3. Klangspiegel I, 0-60 seconds

Figure 4. Klangspiegel II. The tones selected by the Amplitude Filter of sample 1
With these two filter types we are then able to reduce the 1024 amplitude/frequency
pairs of the FFT to usable data, opening the possibility of differentiating it in a
qualitative way. The use of the frequency filter in the first movement allows the whole
spectrum of sample 1 to be divided into 13 frequency bands, each one with a different
envelope. Consequently, every frequency band appears in a particular order in relation
to the others, creating in this way an internal melodic movement that mirrors the
13-degree shape Gestalt (figures 3 and 5).
On the other hand, the amplitude filter selects frequencies from samples that will be
interpreted by the trumpet as tones. During the last three movements different samples
are brought to the foreground by the trumpet: Klangspiegel II uses selected tones from
sample 1 (melodic sequence D); for Klangspiegel III three sequences are extracted from
sample 3 (sequences A, B and C); Klangspiegel IV incorporates all the former sequences
and adds sequence E (figure 9).


3 Imaginary mirror
3.1 Interpolation 1
The role of the computer in Klangspiegel is not restricted to the interpretation and mod-
ification of the micro-time domain (analysis and manipulation of spectral data). Its
combinatorial potential is also present through the use of interpolation. This process is
used to create a fixed number of equal steps between two different states. For example,
the linear interpolation of the numbers 2 and 10 in 5 steps would be 2 4 6 8 10. This pro-
cedure can be applied to a melodic sequence to create a complex network of movements
where several interpolations occur simultaneously, a number of strands overlapping one
another. A further particularity of the interpolation process is its application to the time
or pitch domain; both cases will give us the same final result, but the steps between the
two poles (snapshots of the ongoing process) are quite different (figure 6). In the for-
mer, the movement of each tone in the temporal space presents us with a new order and
rhythm, while the latter presents a purely melodic transformation - with no alterations to
the rhythmic character, but with a new pitch constellation. Applying the interpolation
to the time domain, or to the pitch domain, we perceive each variation as a continuous
transformation process of the previous interpolation (without perceiving any break be-
tween them), as a variation of the original sequence, or as a new entity, depending on
the type of the movement (direction and velocity) and on the duration of the pattern.
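A minimal Python sketch of the procedure (the sequences below are invented midicent values, not material from the piece):

```python
def interpolate(start, end, steps):
    """Linear interpolation between two equal-length lists, inclusive of
    both endpoints: `steps` snapshots of the ongoing process."""
    return [[round(a + (b - a) * i / (steps - 1)) for a, b in zip(start, end)]
            for i in range(steps)]

print(interpolate([2], [10], 5))     # [[2], [4], [6], [8], [10]]

gestalt = [6000, 6200, 6500, 6900]   # applied to pitches (midicents)...
mirror  = [6900, 6500, 6200, 6000]
print(interpolate(gestalt, mirror, 4))
# ...or, element for element, to onset times for a time domain interpolation.
```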

Figure 5. Gestalt and Gestalt mirror

Figure 6. Pitch vs. rhythm interpolation between Gestalt and its mirror

Time domain interpolation


During the whole first movement there is a time domain interpolation of the shape Gestalt
with its mirror. The internal movement of sample 1 gradually changes the melodic
order of the frequency bands, i.e., the order of the highest point of the envelope of each
frequency band, which reflects Gestalt and Gestalt mirror. Each step of the time domain
interpolation consequently creates a new rhythm, and the order of appearance of the
frequency bands also undergoes a transformation. In Klangspiegel I the duration of each
interpolation is also transformed, i.e., compressed or expanded, and its order changed
(figure 7).

Figure 7. Klangspiegel I. Each figure describes the envelope within the frequency band


Pitch domain interpolation


In the fourth movement, the tones played by the trumpet are the result of a pitch domain
interpolation, in 75 steps, between the tone C and a melodic sequence also starting on C
(sequence E, figure 8). Each resulting pitch sequence runs its own course, starting with
C and ending with B+, B♭, D+, E, F, B and C.2 The first sequence is from C to B+, i.e., a
quartertone lower, in 75 steps; the second is from C to B♭. The largest melodic movement
is that of an interval of a fourth between C and F at the fifth tone in the sequence. This
long process results in a very static field that resembles the opening of a bandpass filter's
bandwidth.

Figure 8. Pitch interpolation

Interpolation 2

As the composition of the work progresses, the tones extracted from the FFT analysis
acquire increasing autonomy, opening a further dimension. The focus shifts from the
spectral transformations to the note-to-note domain and its combinatorial possibilities.
Five melodic sequences are then extracted from different samples and form the main
pitch constellations of the last two movements. In addition, the idea of reflection has
been expanded, creating a new perceptual level through the dynamics: the mirror is
now present in the melodic sequence through a dynamically-varying relief that shapes the
sequence by accentuating all its quartertones (figure 9).
Consequently, the application of this rule instills in the melodic sequence a particu-
lar type of interdependence of pitch and dynamics, whereby each sequence has its own
'internal rhythm' (the ff rhythm linked to the quarter-tone pitches), which may or may
not be exteriorised, made apparent. From a single melodic line, a number of interior
and exterior structures are rendered apparent: the pitch, the rhythm that emerges from
the melodic contour, and the rhythm materialised through variations in dynamics. In the
following example (figure 10) the same pitch series is repeated three times, each time
with a different dynamic contour: in the first sequence, there are accents (notes with
upward stems) on all the quarter-tones; the second sequence is a quasi-inversion of the
dynamic contour of the first; and in the third sequence, the accents occur on only one
pitch.

2 The symbol ’+’ is used here to indicate the raising of the pitch by a quartertone.

Figure 9. Melodic sequences A, B, C, D, and E. Dynamic/pitch relation

Figure 10. Klangspiegel III, bars 128-133

In a further development of what was already implicit in the distinction
between notes having and not having quartertone inflections, the same idea - the internal
sub-structures - is expanded through an interlocking of two or more melodic sequences.
This procedure is, in fact, a different type of interpolation with no steps in between
(in OpenMusic it can be achieved with a matrix transposition of two lists). In bar 27,
the interpolation of two melodic sequences occurs (with slight variations): the 13-note
sequence C with the 11-note sequence D (figure 11).
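The operation can be sketched in a few lines of Python (sequence contents invented): the pairing step corresponds to the matrix transposition, which is then flattened:

```python
from itertools import chain, zip_longest

def interlock(seq_a, seq_b):
    """Alternate the elements of two sequences of possibly unequal length,
    like flattening the matrix transposition of the two lists."""
    pairs = zip_longest(seq_a, seq_b)              # the 'mat-trans' step
    return [x for x in chain.from_iterable(pairs) if x is not None]

print(interlock([1, 2, 3], ["a", "b", "c", "d"]))
# -> [1, 'a', 2, 'b', 3, 'c', 'd']
```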
The same interpolation will appear again in bar 70, but this time the accents are
the result of the interpolation AD (again with slight modifications, see figure 12). The
combination of both structures - the pitch sequence CD and the rhythm AD - creates a
new melodic structure that emphasizes both quarter tones and 12-tone equal-tempered
tones.
At the end of the piece a new interpolation passage emerges containing three of the
melodic sequences already played in the former movements (figure 13): D (all the notes
of the second movement), C (from the third movement) and E (sequence appearing in
the fourth movement). Here, in addition to the already-mentioned inner structures - the
pitch, the ’quarter tone rhythm’, and the dynamic itself - a further temporal structure
emerges through a ritenuto applied to the quartertones (upward stems).

Figure 11. Klangspiegel III, bars 27-38

Figure 12. Klangspiegel III, bars 65-77

Figure 13. Klangspiegel IV, bar 76

4 Memory mirror
4.1 Inner connections
The reinterpretation of the mirror idea leads to a reflection upon the time axis of par-
ticular aspects of previously-occurring elements (single notes or rhythmic events, phrases
or whole sections). A certain amount of continuity in the composition is assured by
allusions to these elements, in the form of partial repetition. In this context, the role of
the computer is one of an interface between different musical representations of abstract
structures, assisting in bringing into relief otherwise inaudible structures and thereby
bridging the electronic and the instrumental worlds.


Memory mirror 1 - Rhythm structure

In the first movement, the resulting rhythmic proportions of all 13 voices, i.e., the sum-
ming of each frequency band's highest amplitude points, build a one-voice framework
which is set into new contexts during the second movement. Both tape and trumpet
reflect this temporal macro-structure with different proportions. For example, the du-
rations of the rhythmic line of the trumpet are a diminution of the tape parts, while
the tape part forms a new structure over the trumpet, but with several of its original
aspects compressed (similar to a fractal structure that mirrors a pattern in different pro-
portions). Each of the tape layers resynthesizes different samples with different settings
for the various frequency bands.

Figure 14. Klangspiegel II, bars 1-9 (samples 1, 2, 4, and 5 use only frequencies 11 to 13, while
sample 3 uses all frequencies 1 to 13)

Memory mirror 2 – Pitch structure

A new section at the end of the second movement connects it with the third movement.
All the FFT information is resynthesized and filtered using a bandpass filter, where the
central frequencies mirror the tones played by the trumpet during the second movement.
The subsequent phrase introduces tones that will be heard in the next movement. Again,
the tones' durations mirror those of the former section, compressed by factors of two and
four respectively.


Figure 15. Klangspiegel II. Section that connects both II and III movements. The Y-axis
represents pitch in Midicents, where 6000 corresponds to middle C, 6100 to C] and 6050 to C
quarter-tone sharp. The lozenges represent the envelope of the sound, while the triangles inside
the lozenges represent the time-varying bandwidth of the filter (the Q)

Dynamic and pitch mirror


In the fourth movement, the pitch and dynamic information of all the 527 tones played
during the previous movement is reinterpreted in the electronic and instrumental parts.
All the information concerning the third movement - the grouping of tones, the duration
of the pauses, articulation, tempi, dynamic fluctuations, use of breath or percussive
sounds - is now reduced to a melodic sequence containing pitch and dynamic data. The
vacillation between two levels of dynamics found in the third movement (f / pp) is
now transformed into a timbral change between senza sord. pp and con sord. p in this
new melodic sequence. Reflecting all the dynamic information extracted from the third
movement, the tones played by the trumpet are the result of an interpolation between
the note C4 and the 7-tone sequence E (figure 8). In the tape part, the fundamental
frequencies played earlier by the trumpet now define the central frequencies of a
bandpass filter selecting the FFT data. The chosen amplitude/frequency pairs, centred
on a specific frequency, make it possible to distinguish between pitches, each individual
pair having a particular spectral structure and envelope (figure 16).

5 To Conclude
We can summarize the use of the double process of analysis and resynthesis in Klangspiegel
under two headings:

- the spectral manipulations,

- the combinatorial work.


Figure 16. General structure of Klangspiegel IV

The spectral manipulation of trumpet samples and their resynthesis aims to create a
new entity that does not attempt to substitute the trumpet sound, but rather to establish
a spectral fusion between the instrument and its synthetic mirror. This is particularly
evident when the instrument and tape are heard simultaneously. The tam-tam, which
stands unplayed near the trumpeter, and whose surface is excited by the trumpet playing
with its extremely differentiated dynamics, functions as a filter for the trumpet sound.
The timbre of the tam-tam’s resonance is situated somewhere between the sounds of the
tape resynthesis and of the trumpet itself, thereby contributing to the successful blending
of acoustic and electronic sounds.
The combinatorial work in the third movement appears in this context as a natural
consequence of previous sections. The reinterpretation of the analysis information already
used in the tape part in Klangspiegel I and Klangspiegel II makes it possible to use
this data as pitch constellations that are not dependent on interval logic; the semitone
chromatic division of the octave, and its quarter, sixth, and eighth-tone extensions, are
foregone in favour of a thinking which considers frequency from a micro vs. macro
perspective.
Finally, the conscious confrontation between these two aspects of the composition –
the spectral and the note-to-note domain – represents then an irresistible temptation
that the legacy of both serial and spectral music has left us. The démarche of the
spectral composers in the 1970s offered a new way of considering pitch free of the interval
hegemony that prevailed after WWII. Today, about 30 years after the first spectral
compositions, the attention placed on the sound and its internal spectral structure, timbre
and envelope still affords us an exciting world to explore, but the ubiquity of the computer
and the increasing awareness of its combinatorial potential have opened up a new dimension,
creating bridges between two historically-opposed approaches to composition.

Luı́s Antunes Pena
Luís Antunes Pena is a composer born in 1973 in Lis-
bon, Portugal. He studied composition with Evgueni
Zoudilkine and António Pinho Vargas and attended the
composition seminars of Emmanuel Nunes in Lisbon, as
well as various summer courses in Paris, Darmstadt, Berlin
and Brescia. Particularly important was the course
with Gérard Grisey at IRCAM in 1998. In 1997 he
was one of the creators and artistic directors of the
annual contemporary music festival Jornadas Nova
Música in the city of Aveiro, Portugal (1997-2002). He
went to Germany in 1999 to continue his composition
studies with Nicolaus A. Huber at the Folkwang Hochschule Essen. At the same
time he studied electronic music at the ICEM with Dirk Reith and later with Günter
Steinke. In 2004 he concluded his composition studies and wrote his dissertation on
Helmut Lachenmann's music.

His music has been played in Portugal, Germany, Holland, Sweden and the USA.
He won composition prizes at the "Óscar da Silva" and "Lopes Graça" competitions
and at the 11th Vienna Summer Seminar, and his music has been distinguished and
selected for the ISCM Festivals in Miami and Stuttgart, and at the 32e Concours
International de Musique et d'Art Sonore Electroacoustiques de Bourges. He was
granted the Rotary Club Scholarship and, between 2000 and 2004, the scholarship
of the Foundation for Science and Technology of the Portuguese Ministry for
Science and Education. In 2005/06 he was awarded the 'MozArt 250' scholarship from
the Jeunesses Musicales Deutschland and the ZKM (Karlsruhe).

Selected works: Anatomia de um Poema Sonoro (2003-2004) for soprano,
speaker, saxophone, percussion, piano and live electronics, after texts by
Jorge de Sena and Kurt Schwitters; Sonorous Landscapes I and II (2005) for
tape; Kippfigur (2004) for saxophone quartet; Klangspiegel (2001-2002) for
quarter-tone trumpet, tam-tam and tape; ...Winterlich ruhende Erde... (2000)
for violoncello; Trajectories (1999) for 12 instruments.

Kalejdoskop for Clarinet, Viola and
Piano
- Örjan Sandred -

Abstract. This article begins by explaining how harmony is built in Kalejdoskop. It
then describes the rhythmic structure and its connection to harmony. The harmony and
rhythm were organized with the help of the OMCS and OMRC libraries in OpenMusic.
The main part of the article discusses the musical aspects of the piece and only briefly
describes the computer implementations.
***

1 Introduction
Of all my pieces, it is in Kalejdoskop that I have gone furthest in working with com-
puter assisted composition. The piece was composed after completing the OpenMusic
Rhythmical Constraints library (OMRC). The ideas that triggered the creation of the
OMRC library were used very consistently in the piece.
The choice of a rule-based system made it possible for a rigorous structure
to integrate ear-based decisions. The computer processing is based on building blocks
designed by ear. The larger structures generated by the computer were evaluated by
ear and could be returned to the computer with instructions for corrections. By using
the computer, the structure was kept consistent with my concept even when changes
were made by hand.

2 Harmony
The way we experience harmony depends on two parameters: the vertical chord structure
and the horizontal context. The smallest vertical building block in a chord is the single
harmonic interval. Only four harmonic intervals exist in Kalejdoskop: minor seconds,
perfect fourths, tritones and minor sixths. By using this strict limitation to four harmonic
intervals, a first step is taken in the creation of a harmonic identity. The intervals are
superimposed so as to create more complex chords.
The starting point for the horizontal context is local voice leading. The harmonic
intervals are paired and the voice leading between them is fixed. Since certain harmonic
intervals can only pair up in one way, the horizontal possibilities are limited and the pairs
of harmonic intervals become pieces of a puzzle with which to build longer sequences.
The greater the number of pieces to choose from, the greater the number of variations
that will be possible in the sequences.
Complexity increases significantly when a third voice is added. The middle voice
functions as a lower voice to the intervals between voices 1 and 2, and as an upper voice
to the intervals between voices 2 and 3. An added third voice will consequently give the
middle voice far fewer alternatives in its design.

Figure 1. Four pairs of harmonic intervals. The voice leading in each pair is fixed and
cannot be changed
To avoid a dead end when building sequences, the pairs of intervals are designed so
that the lower voice in one pair can work as the upper voice in another pair. In figure 1
the lower voice in the first pair can work as the upper voice in the second pair, the lower
voice in the second pair can work as the upper voice in the third pair, the lower voice in
the third pair can work as the upper voice in the fourth pair, and the lower voice in the
fourth pair can work as the upper voice in the first pair. In this way there is always a
pair of intervals for voice 2 and 3 which has an upper voice that will fit in to the lower
voice of the pair of intervals between voice 1 and 2. Only two different sequences with
three voices can be created from the four pairs of harmonic intervals in figure 1. Both
sequences will contain two different chords that change places (see figure 2).

Figure 2. The two possible sequences of three part chords built of the pairs of harmonic
intervals in figure 1

If the pairs of intervals are allowed to be read both from left to right and from right
to left, a greater number of sequences becomes possible. To obtain a more varied harmonic
language a larger number of pairs of intervals are needed. In figure 3 all 32 pairs of
harmonic intervals that were used in Kalejdoskop can be found. The upper and lower
voices in a group of four pairs have the same relation as in figure 1.
I tried to avoid minor seconds as a melodic interval when I created the pairs of
harmonic intervals. I often find that a chromatic voice profile has less clarity than
larger intervals (figure 3). Figure 4 analyses the frequency of the different melodic
intervals in the interval pairs. It is clear from the graph that the prime is the most
common melodic interval, followed by the minor third. Two of the minor seconds and
three of the major seconds are identical (10 is identical to 23, and 16 is identical to 20
and 4 if read backwards). All minor and major thirds, perfect fourths and fifths are
unique.

Figure 3. All 32 pairs of harmonic intervals used in Kalejdoskop

Figure 4. The graph shows how frequent different melodic intervals are in the pairs in figure 3

The pairs of intervals are formalized as a rule in a rule-based system in OpenMusic.
The rule only allows harmonic intervals, and voice leading between them, that can
be found among the pairs of intervals. The pairs of intervals can be read both from
left to right and from right to left, and may be transposed.
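A toy Python version of such a rule might look as follows (the two stored pairs are invented data, and the actual OMCS rule is of course expressed differently): a candidate two-voice move is accepted only if it matches a stored pair, read in either direction, at some transposition.

```python
# Each pair: ((lower1, upper1), (lower2, upper2)) - two harmonic intervals
# with fixed voice leading between them (pitches as MIDI note numbers).
PAIRS = [((60, 61), (64, 69)),    # invented example data
         ((64, 69), (66, 72))]

def normalize(pair):
    """Transposition-invariant form: all pitches relative to the first
    lower voice."""
    (a1, b1), (a2, b2) = pair
    return ((0, b1 - a1), (a2 - a1, b2 - a1))

ALLOWED = set()
for p in PAIRS:
    ALLOWED.add(normalize(p))
    ALLOWED.add(normalize((p[1], p[0])))     # readable right to left

def legal(move):
    return normalize(move) in ALLOWED

print(legal(((62, 63), (66, 71))))   # True: transposition of the first pair
print(legal(((62, 63), (65, 71))))   # False: no such pair exists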

Figure 5. The OpenMusic patch that generated the harmony for Kalejdoskop


The rule is applied to all chromatic pitches in a range of three octaves. Five voices are
generated. The calculation is carried out by the PMC-engine in the OMCS library (de-
veloped for PatchWork by Mikael Laurson at the Sibelius Academy in Helsinki). Figure 5
shows the OpenMusic patch, with an open window on the left containing the rule. The
PMC-engine generates five-part chords; every fifth pitch in the answer belongs to one voice.
The system described above is somewhat lacking in global control of the harmonic
development. This can be improved by rules that, for example, force chosen chords at
strategic points in the sequence. In figure 6 the first 24 chords in the piece can be seen.

Figure 6. The first 24 chords in Kalejdoskop

3 Rhythm and form


Different sections of the piece use different strategies for the rhythmic structure. To
simplify, one can say that the piece starts with a minimum of control over the rhythmic
structure and then arrives at a more clearly defined rhythmic language.
The piece can be divided into three sections, with the subdivision of each section
following the same pattern, in which two rhythmic structures change places. The first
rhythmic structure (A) is homophonic, the three instruments rhythmically supporting
each other; the second rhythmic structure (B) is polyphonic, the clarinet and the viola
each playing their own rhythmic and melodic line. The middle section is a mirror of
the surrounding sections, as the A and B subsections change places. The middle section's
pizzicato and staccato playing contrasts with the surrounding sections.

Figure 7. The form for Kalejdoskop


4 Rhythm and structure


As mentioned in the introduction to this article the ideas that led to the OMRC library
were central when I composed Kalejdoskop. I investigated different methods for creating
a rhythmic identity in the piece. There are two possible ways of doing this in the OMRC
library: rules for the rhythm are either applied to a number of allowed note values or to
a number of rhythmic fragments. In the first solution the rhythmic identity is defined in
the rules. In the second solution the rhythm fragments already contain some rhythmic
identity and the rules serve more as tools to build longer sequences out of the fragments.
Both methods are used in Kalejdoskop.

Figure 8. The three hierarchical layers in the rhythmic structure

The rhythm structure is built hierarchically. The most fundamental hierarchical layer
represents the most abstract level and is closest to the form of the piece (the form layer ).
The harmonic rhythm is an intermediate layer (the harmony layer ). The third layer is
the performed rhythm in the music (the rhythm layer ). The connections between the
layers are based on the onsets (i.e. the starting points) of the events. The onsets of the
events in the slimmest (or sparsest) form layer also have to serve as onsets for events in
the harmony layer. The harmony layer may however contain onsets for events between
the onsets in the form layer. The form layer can be seen as a rhythmic simplification of
the harmony layer. The rhythm layer has the same relation to the harmony layer as the
harmony layer has to the form layer i.e. the onsets for events in the harmony layer have
to serve as onsets for events in the rhythm layer. The rhythm layer can contain onsets
for events in between onsets in the harmony layer.
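The connection between layers amounts to a simple predicate, sketched here in Python with invented onset values: every onset of a sparser layer must also be an onset of the next, denser layer.

```python
def respects_hierarchy(sparser, denser):
    """Every onset (here in quarter notes from the start) of the sparser
    layer must also be an onset of the denser layer."""
    return set(sparser) <= set(denser)

form    = [0, 9, 18]                    # a pulse every 9 quarter notes
harmony = [0, 1, 2.5, 9, 13.75, 18]     # may add onsets in between
rhythm  = [0, 0.5, 1, 2.5, 3, 9, 11, 13.75, 18, 19]
print(respects_hierarchy(form, harmony) and
      respects_hierarchy(harmony, rhythm))    # True
```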
The hierarchical structure is the basic rule for the rhythms and constitutes a
fundamental concept of the piece. Each hierarchical layer also has its own set of rules.
There are two types of rules: strict rules that do not allow exceptions, and rules that will
be followed when possible (referred to as "tendencies" in the text). The rules for different
sections in the piece are described below.
The first A1 subsections in the piece are all built in the same manner. The form
layer consists of an even pulse that returns every 9th quarter note. This pulse is not
emphasized in any instrument but it is still present since it influences the phrasing and
the rhythmic gesture (see below).
The harmony layer is based on four note values: a whole note, a dotted quarter note,
a dotted eighth note and a sixteenth note. The note values can come in any order and
they can be repeated (see figure 9). There is however a tendency for the harmonic rhythm
to start with the shortest note value (the sixteenth note) and proceed step by step with
increasingly greater note values thus creating a ritardando in note values. After this
ritardando the tendency is in the opposite direction, i.e. an accelerando in note values.
This rhythmic gesture takes place within one pulse in the form layer (nine quarter notes)
and then starts over again. The rhythmic gesture exists as long as it is not in conflict
with the hierarchical structure discussed above.

Figure 9. The basic note values in the A1 subsections

The rhythm layer is based on six note values: an eighth note tied to an eighth note
triplet, a dotted eighth note, an eighth note, an eighth note triplet, a sixteenth note
and a sixteenth note quintuplet. The hierarchical connection to the harmony layer is a
constraint on the choice of note values. There is one tendency and one rule that work
together in the rhythm layer: the tendency is for note values to be immediately repeated,
but the rule forbids more than three repeated note values in a row, with the exception of
the sixteenth note quintuplet, of which a maximum of five in a row is allowed. Triplets
and quintuplets are not allowed to start offbeat.

Figure 10. The form layer, harmony layer and rhythm layer at the beginning of Kalejdoskop.
The final score for this section can be seen in figure 17

To summarize the rhythmical language in the A1 subsections: three identical note
values often succeed each other. There is a hidden rhythmical gesture in the harmony
layer (a ritardando followed by an accelerando). This gesture affects the phrasing in the
rhythm layer. There are no other controls over the rhythmical language.


As the piece proceeds, the rhythm becomes increasingly, and in greater detail, directed
by the system. The rhythm layers in the last A3 subsections are based on pre-composed
rhythmic motifs (or parts of motifs) instead of single note values. In this way the
rhythmical language is more predefined.
Just as in the A1 subsections, the form layer in the A3 subsections consists of an
even pulse that returns every 9th quarter note. The note values in the harmony layer are
expanded to seven: a whole note, a quarter note tied to a quarter note triplet, a dotted
quarter note, a quarter note tied to a sixteenth note sixtuplet, a quarter note triplet, a
eighth note triplet and a sixteenth note. The note values are allowed to come in any order
and can be repeated. As with the A1 subsections, there is a tendency for a rhythmical
gesture to form in the harmony layer. In the A3 subsections the gesture consists of an
accelerando that starts over at every new pulse in the form layer.

Figure 11. The basic note values and motifs in the A3 subsections

The rhythm layer is based on eight rhythmical motifs or fragments of motifs (see
figure 11). Because of this, the hierarchical connection to the harmony layer becomes a
harder puzzle to solve than in the A1 subsections. Two more rules influence the choice
of motif: a motif cannot be immediately repeated, and triplets and quintuplets are not
allowed to start offbeat.

Figure 12. The form layer, harmony layer and rhythm layer at measure 78 (the beginning of
the first A3 section). The performed rhythm is marked with gray. The final score for these
measures can be seen in figure 18


To summarize the A3 subsections: the rhythmical language is more complex but also
more controlled than that of the A1 subsections. The language is mainly defined in
the pre-composed motifs. The system builds longer sequences of the motifs, taking the
hierarchical structure into consideration. The only exception to the strict structure in
the piece can be seen in figures 12 and 18: since the music makes sudden jumps between
rapid passing notes and sustained chords, the performed rhythm originates alternately
from the rhythm layer and the harmony layer.
In contrast to the A subsections, the B subsections have two polyphonic voices. Because
of this there are two rhythm layers. Both rhythm layers have a hierarchical relation to
one single harmony layer (see figure 13). The B1, B2 and B3 subsections that exist in
the piece have a similar construction. The B1 subsection is used as an example below.

Figure 13. The hierarchy between layers in the B subsections

The form layer contains one single pulse whose length is that of the whole subsection.
The harmony layer is based on six note values: a whole note, a dotted quarter note, a
quarter note tied to a sixteenth note in a sextuplet, a dotted eighth note, an eighth note
triplet and a sixteenth note. The note values can come in any order, but there is a
tendency to avoid immediate repetition. There is no rhythmical gesture similar to the
gesture in the A subsections. Instead, the much more complex hierarchical connection
to the two rhythm layers affects the choice of note value.

Figure 14. The basic note values and motifs in the B subsections

The rhythm layers are based on eight note values and seven rhythmic motifs (see
figure 14). The note values and the motifs are allowed to come in any order with the
exception of the immediate repetition of a note value or motif. To some extent, the two
layers are built independently of each other. There is however a tendency that at a point
where a single note value is chosen for one of the rhythm layers, a motif is preferred in
the other rhythm layer and vice versa. The interplay between the rhythm layers will
create a counterpoint, where one voice has longer note values while the other voice has a
clear profile with more elaborate rhythmic motifs in shorter note values. The two rhythm
layers are connected since they are built on the same harmony layer.

Figure 15. The form layer, harmony layer and two rhythm layers at measure 17 (the beginning
of the first B1 section). The final score for these measures can be seen in figure 19

Figure 16. The OpenMusic patch for the computation of the A1 sections

248
Kalejdoskop for Clarinet, Viola and Piano

The rhythmical structure of the entire piece was created on the computer. If the
computer reached a dead end during calculation, it was not decided in advance which
layer should be changed: the harmony layer might be adjusted to fit the rhythm layer
as well as the reverse. The rules cannot be broken, but it is always possible to
compromise with the tendencies. Figure 16 shows the OpenMusic patch that was used
for the computation of the A1 sections. In computer parlance, tendencies are called
heuristic rules.
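The difference between the two kinds of rule can be suggested by a toy generator in Python (note values and probabilities are mine, and a real constraint engine backtracks rather than samples): the strict rule filters the candidates, while the tendency merely biases the choice among them.

```python
import random

VALUES = [0.75, 0.5, 1/3, 0.25, 0.2]     # a few note values, in quarter notes

def rule_ok(seq, v, max_run=3):
    """Strict rule: never more than max_run equal values in a row."""
    run = 1
    for x in reversed(seq):
        if x != v:
            break
        run += 1
    return run <= max_run

def next_value(seq):
    candidates = [v for v in VALUES if rule_ok(seq, v)]
    # Tendency (heuristic rule): prefer repeating the last value when allowed.
    if seq and seq[-1] in candidates and random.random() < 0.7:
        return seq[-1]
    return random.choice(candidates)

seq = []
for _ in range(12):
    seq.append(next_value(seq))
print(seq)    # runs of repeated values, but never more than three in a row
```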
The form of the piece influences the choice of single note values since the hierarchy
and the rules link the form via the harmonic rhythm to the note values. The rhythmical
system was very strictly applied in Kalejdoskop. A somewhat freer use of the system can
be found in Amanzule Voices for violoncello and live electronics.

Figure 17. The beginning of the piece (see also figure 10)

Figure 18. The beginning of the A3 section, measure 78 (see also figure 12)


Figure 19. The beginning of the B1 section, measure 17 (see also figure 15)

A more general description of the ideas behind the OMRC library can be found in
the PRISMA review no.1.

5 Summary
The method of composition described in this article has as its starting point the single
musical building block, such as a single interval or a single rhythmic motif. The method
differs from methods used by Xenakis for example. Xenakis focuses on the total effect
of the musical parameters. Single pitches and rhythmic motifs are left to mathematical
formulas. In Kalejdoskop there is a relationship between elaborate details (sometimes
designed in advance) and the overall impression.
Kalejdoskop was commissioned by the Swedish Arts Grants Committee, was com-
posed in 1999 and premiered on October 2 of the same year at the Modern Museum
of Contemporary Art in Stockholm. The piece is recorded on the CD "Obscura" (dB
productions 2002), and the score is available at the Swedish Music Information Center.

Bibliography
[1] Laurson, M.: PATCHWORK: A Visual Programming Language and Some Musical
Applications. Doctoral dissertation, Sibelius Academy, Helsinki, Finland, 1996.

[2] Sandred, Ö.: Kalejdoskop. Swedish Music Information Centre, Stockholm, Sweden,
1999.

[3] Sandred, Ö.: OpenMusic RC library version 1.1. IRCAM, Paris, France, 2000.

[4] Sandred, Ö.: OpenMusic RC library version 1.1, tutorial. IRCAM, Paris, France,
2000.

[5] Sandred, Ö.: Searching for a rhythmical language. In PRISMA no. 1. Euresis
Edizioni, Milano, Italy, 2003.

Örjan Sandred
Örjan Sandred (born 1964) grew up in Uppsala, Sweden. After having studied
Musicology at the University, he moved to Stockholm to attend the composition
program at the
Royal College of Music, 1985-97. He took private lessons in Copenhagen and London
with Poul Ruders, and in 1994-95 he studied composition at McGill University
in Montreal, Canada. In 1997-98 he attended the annual course in composition
and musical computing at IRCAM in Paris. Among his teachers are Sven-David
Sandström, Magnus Lindberg, Pär Lindgren, Daniel Börtz, Bruce Mather, Bruce
Pennycook (computer music) and Bill Brunson (electro-acoustic music).

As a composer, Sandred has regularly received commissions from Swedish
and foreign music institutions. He has written music for different types of en-
and foreign music institutions. He has written music for different types of en-
sembles, including symphony orchestra, ensembles with live electronics, chamber
music and electro-acoustic music. Among his pieces are Polykrom (for chamber
orchestra), Cracks and Corrosion I and II (I for piano quintet and II for guitar and
live electronics) and Amanzule Voices (for cello and live electronics).


Sandred taught electro-acoustic music and composition at the Royal College
of Music in Stockholm from 1998 to 2005. Since 2005 he has been professor of
Composition at the University of Manitoba in Winnipeg, Canada. He has been a
guest lecturer at several institutes, for example the Sibelius Academy, Helsinki, the
Conservatoire National Supérieur de Musique, Paris, and the Bartók Seminar in
Szombathely, Hungary.

He is a member of the international musical research group PRISMA (Peda-
gogia e Ricerca Internazionale sui Sistemi Musicali Assistiti). PRISMA is formed
by composers and was created within Centro Tempo Reale (Florence).

Flexible Time Flow,
Set-Theory and Constraints
- Kilian Sprotte -

Abstract. This article presents several examples of my use of OM as a compositional tool in the process of writing the piece Fernweh. It was useful in the domains of harmony and rhythm, sometimes as a simple and rapid scratchpad, but also for solving constraint satisfaction problems, allowing me to investigate a comparably large space of combinatorial possibilities.
***

1 Introduction
How can two poles - a great distance apart - be made to meet? Each pole has its own time flow. If neither changes its tempo, they will simply run in parallel and meet in infinity. Only a flexible flow of time will enable them to approach each other. Acceleration and deceleration will bring them closer and move them apart respectively.
This was the initial idea behind the piece Fernweh for cello, accordion and percussion, which I completed in February 2003. It was premiered on April 4 at the Frühjahrstage für zeitgenössische Musik in Weimar, by the ensemble Klangwerkstatt Weimar.
The title Fernweh brings together distance and grief; it can be described as the "opposite of homesickness". It was the notion of farness and nearness that interested me, a notion that can also be regarded as a play of dependence and independence between the two poles.
As for the instrumentation, the cello mainly plays the role of one pole, while accordion
and percussion represent the other. From time to time however they contribute to the
layer played by the cello.
On the most obvious level of the form, the piece is divided into two parts, lasting 4 and 2 minutes respectively (for the beginning of each part, see figures 1 and 14). I will refer to them as the first and second part. Since this has been my first in-depth contact with CAC, and since this essay deals mainly with complex aspects of CAC, my approach has been somewhat like a patchwork, without a great deal of global formalisation.
In the process of getting to know the various CAC procedures, I became increasingly aware of a possible categorization into two groups that in my opinion show a distinct difference in quality. Some CAC tools can be considered as a useful but simple extension of ordinary means. In a second category, the actual algorithm is automatically generated on the basis of a set of user-defined constraints; here one enters a very different domain, going far beyond the use of the computer merely as a faster scratchpad. It enables the composer to solve problems of great complexity, and opens up new possibilities, exceeding what can be done with pen and paper, all thanks to technology.


Figure 1.

2 First part
2.1 Rhythmic aspects
Flexible time flow
The subject of two dialectical poles first of all found its expression in the time-flow domain. It is realised by superimposing different constant tempi, but also by combining a constant and a (continuously) changing tempo.


OM showed itself to be a very helpful tool, especially when working with this flexible time flow: it allowed me to lay down rhythms as if they were played with constantly changing tempo (whether ritardando or accelerando), thus making it possible to superimpose them on other, differently treated layers.
The possibility of rewriting a given pulse in a different tempo, while retaining the
same sound and pitch, is obviously of great interest. This principle is shown in figure 2:
Tempo 90 is being represented in tempo 60.

Figure 2.

What happens when the tempo of the rhythm being transcribed needs to be flexible? An example of this is shown in the accelerando that leads from 60 to 90 in figure 3: the notes, written in tempo 60, have increasingly shrinking durations.

Figure 3.

In order to deal with these rewriting issues, I created two functions for OM, namely
notdur->realdur and realdur->notdur that are able to convert between the notated
duration values and the sounded duration values. A mathematically smooth accelerando
as required in the second example will be defined as a linear interpolation on a logarithmic
tempo scale: the ratio between different tempi is more relevant than their difference
(Stockhausen, 1963).
Figure 4 shows a patch in which a rhythmical fragment from Fernweh is converted from the notation of the desired sounding result in tempo 60 to its notation under a ritardando curve. The list (125 1875 667...) shows the desired durations in ms; these get converted to the following list with increasingly shorter durations (125 1755 567...), which can now be notated with a constant-tempo quantization at tempo 60. The second parameter (60 11000 30) describes the tempo curve as a break-point function with alternating tempo values and inter-point durations.


Figure 4.

Hierarchic rhythm

For the general rhythmic structuring of the first part, I have used a top-down approach. Starting from a framework of time points covering the first four minutes, I have inserted between them "complex rhythm cells" that were themselves constructed by hierarchically combining different "simple rhythm cells", chosen from a pool of about 80 cells. A patch that allows the combination of those simple cells into a more complex structure can be seen in figure 5. I use the input in the top left corner to insert a list, which describes the resulting rhythm. In this case: (28 (37 (76r (2n 37r 4n)) 75)). Or, more clearly, in figure 6.

Figure 5.


Figure 6.

A number represents an index of one simple rhythm of the pool. The elements used in this example are shown with an arbitrary eighth-note pulse as their unit. In the notation, a list (<head> (<tail>)) describes a subdivision of the rhythmical pattern <head> by the patterns listed in (<tail>): succeeding elements of the tail refer to successive durations of <head>, which they fill in. As in this example, expressions can be nested. If an "r" is added to a number, the pattern is reversed. A notation like "2n" means that 2 duration values of the super pattern are not subdivided.
The result of this description is shown in figure 7. By anchoring this and other patterns in the time points framework, they are once more treated like a sequence of proportions that are individually stretched. Finally, the KANT library has been used to quantize the resulting (polyphonic) rhythm.

Figure 7.

Aperiodicity

In this section, I examine hierarchic rhythmic structures anew. I describe a rhythm as aperiodic if it does not contain repeated duration values and if this is also true for any other abstract level. The (aperiodic) structure (1 3 2 4) can be seen in figure 8 with all its related super structures. None contain direct repetitions.
I have taken advantage of this property in writing a short sequence for temple blocks
(first staff of figure 9). Each of the 5 blocks can be thought of as one continuous voice
(staffs below). All the voices are aperiodic. This sequence has been generated using the
pmc-engine. In addition to the aperiodicity, some rules concerning playing possibilities
were applied (see Second part, Harmonic aspects).


Figure 8.

Figure 9.

2.2 Harmonic aspects


Selection of an initial set
Many of the harmonic structures of the piece are based on one initially chosen set of five
pitch-classes: (0 1 3 6 7). It was selected from the list of all possible prime forms of Z12 (see the table in figure 10; consecutive sets marked a and b are inversions of each other, and symmetric sets are marked s).
Prefiltering has been carried out according to two criteria:

1. The set should contain all the interval classes.

2. The set should be combinatorial.

Combinatoriality for sets of cardinality other than 6 has been defined following the concept of combinatorial hexachords. In this sense, a set is called combinatorial (marked c) if it is a subset of its complement, and inverse-combinatorial (marked inv-c) if its inversion is a subset of its complement.

Figure 10.
Interestingly, no all-interval set is combinatorial, and vice versa. Therefore, I have chosen to use the inverse-combinatoriality property (possessed by every all-interval set except (0 1 3 5 6)).
From the remaining possibilities, I intuitively chose to use (0 1 3 6 7) and its inversion (0 1 4 6 7) as initial sets. In figure 11, you can see a circular representation of (0 1 3 6 7) and its complement. Using these features, a twelve-tone row can be constructed.

Figure 11.

Constraint-based harmony

A 148-chord progression has been established as a harmonic framework for the first part. It is based on the sets (0 1 3 6 7) and (0 1 4 6 7), as well as their complements (0 1 2 3 6 8 9) and (0 1 2 3 6 7 9).


For each of these four sets, all the possible registral distributions in the space of two
octaves have been generated. Figure 12 shows as an example all the 46 possibilities for
the set (0 1 3 6 7). Starting from these, I have made intuitive selections (marked with
*), which I used as a pool of chords that can be transposed, but not changed in their
registral layout.

Figure 12.

I have used the PWConstraints pmc-engine in order to generate the chord sequence based on this chord pool, applying the following rules:
• no octaves in outer voices, meaning no pitch-class duplicates, whether in the suc-
ceeding pitches of soprano or bass, or crosswise,
• in a sequence of 16 chords there must be no chord-duplicates (regardless of trans-
position),
• in neither the soprano nor the bass voice may any pitch-class be duplicated in a
succession of 4 pitches,
• common pitch-classes between adjacent chords must be close to predefined values,
• there must always be as many common pitches (not pitch-classes) as possible be-
tween adjacent chords (as a heuristic rule).
Figure 13 shows the first 9 chords of the resulting sequence, used as a framework for figure 1.

Figure 13.

3 Second part
3.1 Rhythmic aspects
The second part (for its beginning, see figure 14) is entirely notated using two super-
imposed tempo layers ( / ). The predominant rhythm on the faster layer is a


continuous sixteenth-note pulse; it is equivalent to a quintuplet sixteenth division in the other tempo.

Figure 14.

Formally, this part can be subdivided into four sections of equal length (see figure
15). The A sections share the same rhythm. For their changing characteristics see below.

Figure 15.


In the other layer, there is again a process of continuously changing time flow, but much longer in its total duration. Section C consists of a rhythmic ostinato with a steadily increasing tempo. Because this accelerando is so spread out, it is actually incorporated into the written rhythm, which is played at a constant tempo. The shape of the ostinato pattern itself is also changed, in such a way that at the end of the process it becomes a completely regular pattern, coinciding with the other layer.

3.2 Harmonic aspects


The pitch structure of the A sections consists of a common sequence of pitch-classes based on the twelve-tone row created from the initial set of five notes (see figure 11). An excerpt of this is shown in figure 16, which relates to a former version of the piece for cello and piano. Some of the pitch-classes are missing in the piano part: they have been filtered out and have served as a framework for the cello part. The registral distribution of the pitch-classes of the piano was controlled by a sequence of changing pitch fields.

Figure 16.


When I transcribed this sequence for new instruments, I wanted it to be played by marimba and accordion. My idea was to change only the registral distribution in order to make it playable for these two instruments. At the same time, I wanted to keep to the idea of only gradually changing pitch fields, a rule which I expressed as follows:

Each pitch-class has to stay in its register at least four times before the register of a recurring pitch-class can be changed.
This rule (as a musical constraint) was confronted with a set of rules modelling the
playing capabilities (as a technical constraint) for each instrument.

• For the marimba, the system attempts to use alternating strokes for each hand (assuming only two mallets), but also allows the crossing of hands and double strokes, if certain conditions are fulfilled.

• For the accordion player, who, like the marimba player, splits the monodic line between hands, the jumps in each hand are kept to a minimum.

The case of the temple blocks described above (see First part, Aperiodicity) was treated similarly to the marimba. A set of simplified rules was used.

4 Conclusion
The more I worked with the various techniques presented above, the more I felt the need to unify them into one system. I believe it would be ideal to use a constraint-solving engine as a kernel algorithm, around which all the other tools can be developed.
An encouraging starting point for me was the example of the marimba described
above. I think the most natural approach would be to formulate rules in different domains such as rhythm, harmony, playing technique, etc., and "solve" them simultaneously in one system, or at least "find the best compromise" between conflicting rules (irrespective of the domain to which they belong!).
Unfortunately, this would not be an easy task. Even if one were to limit the applica-
tion to generating polyphony and to controlling the harmonic and rhythmic structures of
each voice (as well as those of the whole score), there would be considerable difficulties.
The pmc-score engine of the PWConstraints library (Laurson, 1996), for example, only works on predefined rhythmic structures.
There are, however, some new developments in this field (Anders, 2002). In connection with the constraint-solving strategy of "propagate and distribute", it has become possible to generate a score even where the rhythmic structure is not known beforehand. When the system is in the middle of a search, the next score element to be generated is the one with the earliest starting time.

Bibliography
[1] Anders, T.: "A Wizard's Aid: Efficient Music Constraint Programming with Oz". Proceedings of the ICMC, 2002.
[2] Forte, A.: The Structure of Atonal Music. Yale University Press, New Haven, Connecticut, 1973.

[3] Laurson, M.: PatchWork, PWConstraints. IRCAM, Paris, 1996.


[4] Lester, J.: Analytic Approaches to Twentieth-Century Music. Norton & Company, New York, 1989.
[5] Sprotte, K.: OMGroups 1.0. IRCAM, Paris, 2003.

[6] Stockhausen, K.: "...wie die Zeit vergeht...". DuMont, Cologne, 1963.

Kilian Sprotte
Kilian Sprotte, born in 1977 in Hamburg, started violin studies at the age of seven with Isabella Petrosjan. Later, he spent a year at the Brockwood Park School founded by J. Krishnamurti. He has studied composition with Younghi Pagh-Paan since 2000 and has attended various master-classes with Klaus Huber and Brian Ferneyhough. He was strongly attracted to the field of CAC, spending a year at IRCAM as a guest researcher. Following this experience, he has worked constantly to improve a number of software tools, among them generational applications employing constraint programming techniques. He is also working on music notation applications. The group PRISMA has on several occasions invited him to give talks on his research. His works have a strong focus on instrumental music, but include electronic sounds and video (Museum Weserburg, 2001). He received an award for his composition Fernweh at the Weimarer Frühjahrstage 2003 and obtained a grant from the DAAD.

To Touch the Inner Sound, Before it
Becomes Music; to Dream About
Dreams, Before they Become Real
- Elaine Thomazi-Freitas -

Abstract. This paper describes a composer's approach to her use of OpenMusic's tools during the various stages in composing a musical work (Derrière la Pensée, for nine instruments). Rather than engage in a deeply technical discussion of the software, the present article is more concerned with the intuitive aspects, and the intricacies of human/machine interaction in music.
***

1 Introduction
“Le dernier mot sera la quatrième dimension.
Longueur : elle en train de parler
Largeur : derrière la pensée
Profondeur : moi en train de parler d’elle, des faits
et sentiments et de son arrière-pensée

Je dois être lisible jusque dans le noir” (Lispector, 1998).

[The last word will be the fourth dimension. Length: she, in the act of speaking. Width: behind the thought. Depth: me, speaking of her, of facts and feelings and of her hidden thought. I must be legible even in the dark.]


The starting point for the composition presented here was the analysis of recorded sea and wind sounds. Seeking a more personal approach, I included processed versions of these sounds (by convolution), an approach that is close to another of my current fields of research: the marriage of ambient sound colors with musical material. After the
language by means of alterations made to their spectral content. In the course of this
process, I abandoned the direct approach towards musical form and content, and decided
to work with individual groups of sounds as they were being created, one at a time,
controlling them directly more or less in a ’concrete’ way, before ascribing any musical
meaning or context to them.
Much of the underlying context for the compositional process is based on the idea
of the moment of volition that precedes actual thought, or rather the initial idea that
becomes the act of thinking. This concept is taken from a book by the Brazilian writer
Clarice Lispector, Un Souffle de Vie 1 (Um Sopro de Vida). I was determined to maintain

1 The choice of a French translation of the book has had a certain impact upon the composition, the title of the piece being extracted from the short passage quoted above. So far, I have not been able to find an English version, so all the references in the present paper are from the French edition.


this concept throughout my work, and closely examined every single sound with a view to extracting the more primitive aspects of each. Because the sound sources come from nature, the material is essentially primitive, as is its meaning, at least in the form in which it is used in the music under discussion.

2 Musical source
The choice of short samples of recorded sea and wind sounds as the musical source is linked to the idea of extracting musical gestures out of extremely rich sound spectra. Although these nature sounds have a rather chaotic content, they are closer to a white-noise spectrum than to inharmonic (or inharmonious) spectral data. To extract music out of the latter would have been to run the risk of producing a commonplace interpretation of the movement of the sound itself, and of discovering no new material. One way of avoiding such an outcome was to process the samples by convolution with voice sounds, which contributed to the enrichment of their spectral content. The resulting sounds can be described more or less as a sort of sculpted white noise, as if some of the specific inner parts had been emphasized in color without losing the richness of the overall chaotic character.
The lengths of the analyzed samples ranged from less than 1 second to 7 seconds.
Initially, these samples were analyzed with the software AudioSculpt, and the resulting
data was interpreted in OpenMusic by means of the ’Audiosculpt to OpenMusic’ function
(as->om, repmus library). This incoming data, arranged as a sequence of chords repre-
senting a simplification of its spectral content, was filtered by the seq-extract function
(om-tristan library), subsequently resulting in a bank of chord sequences that constitutes
the raw material for the composition. The final piece is written for nine instruments:
flute, B♭ clarinet, oboe, French horn, piano, violin, viola, cello, and double bass. This instrumentation was rendered in OpenMusic by MIDI instruments, a very useful resource that provided an approximation of the real ensemble during the conception of the work.

3 OpenMusic source
OpenMusic has been used in my composition via the om-tristan library, created for work on spectral data and conceived by the French composer Tristan Murail, with whom I worked during my doctoral studies at Columbia University in New York City. Once the connection between AudioSculpt and OpenMusic is made, the retrieved data goes through a series of manipulations using specific functions from the library.
The om-tristan library communicates with other applications mainly via the Au-
dioSculpt spectral analysis data import function (using the ”export partials” command).
A single OM object, as->om (Audiosculpt to OpenMusic), reads the incoming data and
converts it into a specific format to be processed by other OM objects. The data includes
a text module of the analysis data; minimum and maximum amplitude values, scaled as
MIDI velocities; a delta value that forces the grouping of close events into chords; limit
values (max and min) in midicents, defining the allowed pitch range; microtonal approx-
imation (i.e., whole tone, semi-tone, quarter-tone, and eighth-tone); and a value that
reduces the polyphony, by retaining louder partials as the most significant content. After
processing all this information, the object outputs a list of chords, which can be read by


a chord sequence module. om-tristan’s objects work by means of procedures that can be
grouped into the following domains:

• Frequency
• Control
• Intervals
• Files
• Combinatorial
• Functions
• Chaos
• Arithmetics
• Objects
• Conversions
• Groups
• List treatments
• Board operations
• Numerical series
• Treatment of functions
• Aleatoric
• Midi

In my composition, I focused chiefly on frequency manipulation. The main steps, carried out with the sample analysis data, start with the extraction of the melodic motives, the selection and manipulation of harmonic content, and the manipulation of these two bodies of musical material in order to generate the musical content of the piece. The function diamanter was used throughout the compositional process, and was responsible for the earlier data manipulations that generated the melodic material of the piece (cf. figure 1). This function operates on the incoming chord-seq data, processing each of the chords in the list (an aleatoric function, in accordance with a percentage value expressed in the function parameters) and producing a corresponding nth harmonic, thereby creating a new chord sequence. This function was used here in such a way as to allow me to extract a single melody from the complex polyphonic sequences contained in these first steps. Subsequently, the diamanter function was employed to generate complex arpeggio-like gestures that characterize a second type of structural material, more obviously present in the second half of the piece.
Regarding the harmonic materials, the imported analysis data was passed through a set of distortion functions, such as freq-distor, ch-distor, ch-modif, densifier and fshift-proc (cf. figure 2), besides being subjected to my personal choices when simplifying the resulting contents, according to purely subjective musical criteria. A further step was the creation of harmonic fields for the structuring of bridge passages between sections. This was done by means of the fq-interpol function. When all this musical basis was structured and ready to go, the individual materials were processed several times through the above set of functions, in an empirical manner, until a satisfactory musical result was achieved. It is difficult to decide whether or not the final result is faithful to the initial raw material. However, the guiding line for the entire composition was that these sounds all come from nature.
Another set of functions (deformer%, stretch, l-distort/3, l*curb, besides diamanter) was occasionally applied to a whole chord sequence, and at other times to a single specific chord, allowing for wide variations along both the horizontal and the vertical dimensions. These processes became more evident towards the second half of the piece, where I was working with increasingly complex material, and the final results were quite unexpected. Another group of aleatoric functions was used earlier in the composition, affecting only the horizontal domain.
As I mentioned above, the main concern during the compositional process was to avoid an approach based on musical form and structure, and instead to focus upon individual musical cells. A later stage in this process was that of assigning the instruments' colors

Figure 1. Melodic extraction from imported AudioSculpt analysis

Figure 2. Distortion functions on harmonic materials


to these little gestures, even before positioning them on the time scale of the piece. It is
of some interest to look at an orchestration example, carried out by MIDI representation
of the instruments (figure 3).

Figure 3. Example of orchestration inside a multi-seq

4 Maquette

Another important feature of OpenMusic was the maquette. This resource made it pos-
sible for me to apply a continuous back-and-forth (or empirical) procedure while exper-
imenting with the newly created musical material and situating it in musical space (an
aspect that is more obvious in the final stages of the composition).
When all the main calculations for the musical content had been carried out, the various types of material were structurally placed on the first maquette, using an empirical approach. During this stage, a very important aspect was the maquette's flexibility. At last I felt as if I was getting to the core of the piece: my preoccupations were less technical, leaving room for musical experimentation, where "real thought" began (to borrow Lispector's imagery). Also during this stage of the process, the maquette2obj function was used, acting as a bridge between the initial sketches and the new material used in the final piece. Figures 4 and 5 show the first version of the maquette (with a fragmentary internal structure) and the final version (in which the blocks constitute the main sections of the piece).


Figure 4. First version of the maquette

Figure 5. Last version of the maquette

5 Finale
In the final stage, the interface with Finale was another important tool, providing an easier way of writing the final score directly from OpenMusic's workspace. The interfacing was carried out via the ETF (Enigma Transportable File) resource, directly from either voice objects or poly objects. After the whole piece had been assembled into a final-version maquette, it was imported (via the maquette2obj function) into a fairly complex patch and broken down into smaller fragments. Each of these fragments was individually quantized (for purposes of simplifying the rhythmic notation) and exported to a


poly object, which enables the finished material to be directly transferred into Finale, the musical notation application. A few minor adjustments were needed when reassembling the music in Finale but, considering the level of complexity of the score I was dealing with in OpenMusic, they were almost negligible. The reader is invited to compare the two parts of figure 6, one showing the excerpt in its final musical notation in Finale, the other showing how it looked inside a poly object in OpenMusic. The minor rhythmic adjustments were made for purposes of simplifying the musical notation; in the Finale application, this process of score simplification had already been carried out. Nothing was changed or lost in the course of the transfer between the two applications.

Figure 6. OpenMusic to Finale’s transfer

6 Aesthetic aspects
Notwithstanding all the preceding discourse, organized in neat logical steps, based on
content that has undergone profound transformation, as well as generating forms out of
the relationship between time and space, not to mention the act of taking nature as a
model - none of this would be worth anything if it were not music. Now that the most
difficult task is over, that of making music, it is time to try to understand the process
with the help (or in spite of the interference!) of the written word. I would like to take
the liberty of drawing a parallel between my work Derrière la Pensée and the poetical
reference borrowed from Lispector’s book, Un Souffle de Vie, that provided the structural
guideline for this musical composition.
Lispector’s work has a deep poetical structure and an imagery that is exemplary in its
richness of metaphor and variety of subject and situation. I often find myself drawn into
a strange, different world when reading it, a world where the flow of time almost always
joins the present. I get the impression that the author has been jotting words down on
paper even as the act of thinking unfolds, in a deeply intuitive way. I believe that much
of my work is created in the same intuitive manner. In addition to the poetical reference,
I also borrowed a structural reference taken from the author’s questions on the process
of artistic creation and its relationship to intellectual thought:


“Je suppose que le compositeur d’une symphonie a seulement la ‘pensée avant la pensée’, que ce qui se voit dans cette rapidissime idée muette est un peu plus qu’une atmosphère ? Non. À vrai dire c’est une atmosphère qui, déjà colorée avec le symbole, me fait sentir l’air de l’atmosphère d’où vient tout. La pré-pensée est en noir et blanc. La pensée avec des mots a d’autres couleurs. La pré-pensée est le pré-instant. La pré-pensée est le passé immédiat de l’instant. Penser est la concrétisation, matérialisation de ce qui a été pré-pensé. À vrai dire, la pré-pensée est ce qui nous guide, car elle est intimement liée à ma muette inconscience. La pré-pensée n’est pas rationnelle. Elle est presque vierge.
[Are we to suppose that the composer of a symphony experiences nothing other
than ”thought before thought”, something that can be discerned in the extremely
rapid silent idea that is little more than an atmosphere? No. In truth, it is an
atmosphere that, already coloured by the symbol, gives me the impression of
the air of the atmosphere from which all things flow. Pre-thought is in black
and white. Thought with words is made up of other colours. Pre-thought
is the pre-instant. Pre-thought is the immediate past of the present instant.
Thinking is the realisation, the materialization of that which was pre-thought.
In truth, it is pre-thought that guides us, because it is intimately linked to
silent unconsciousness. Pre-thought is not rational. It is almost virginal.]”
(Lispector, 1978)

It was with this idea in mind that my work Derrière la Pensée was conceived: almost a feeling that precedes the musical idea itself, which was subsequently transformed into music. To speak of raw material that is sampled and then transformed and rebuilt with a new meaning sounds so much like a causal chain. How can causality be applied to playing with colours and raw elements? How can it describe the steps that led to a final finished musical work?
Music carries its own silences, and it is the music within music that is the metaphor
of pre-thought, the music that touches inner sound, before it becomes music.

7 Conclusion
One of the most important aspects when working with OpenMusic is the flexibility and the multiplicity of interfaces. More than just software for Computer Assisted Composition, it provides sufficient resources for a dynamic composition process, allowing empirical, experimental data manipulation. The maquette and the direct link to Finale, via ETF,


are important features that make OM into a complete package. Another important aspect is Tristan Murail's research on spectral materials. The use of the om-tristan library opened up for me a new approach to sound in acoustic composition, helping me to move forward in my work in computer music.
Derrière la Pensée was commissioned by Speculum Musicae, Ensemble in Residence
at the Music Department of Columbia University. It was composed in 2001-02, and
premiered in May 2002 at Miller Theatre, New York City.

Bibliography
[1] Lispector, C. (1978), Un Souffle de Vie (Um Sopro de Vida). Traduit du brésilien
par Jacques et Teresa Thiériot (1998), Des Femmes – Antoinette Fouque, Paris.

Elaine Thomazi-Freitas
Composer Elaine Thomazi-Freitas was born in Brazil in 1970. She received a Masters degree from the Federal University of Rio de Janeiro, and completed a doctoral program at Columbia University, in New York, in 2003. In 2001 she worked at IRCAM, Paris, under the direction of Gérard Assayag and Andrew Gerzso. Working with Tristan Murail as an advisor during her studies in the USA, she started to focus more on computer music, computer-aided composition, and multimedia.

Her works range from the acoustic repertoire, including solo, chamber, and orchestral pieces, to pure electroacoustic music. She is a recipient of several scholarships from the Brazilian Government, as well as from Columbia University. In 2003, she was short-listed for the Prix SCRIME composition prize in Bordeaux, France.

Actively engaged in the musical scene in Brazil, she developed an international career throughout Europe and North America after moving to New York City in 1998.

In Brazil, her most recent work has been on a teaching program funded by a governmental agency of Rio de Janeiro, conducting research on music and multimedia within the music department of the University of Rio de Janeiro, from August 2003 to September 2005. At present, she presents her work in festivals and concerts as a freelance composer. In 2005, her work was performed in Rio de Janeiro, Brazil; Valparaíso, Chile; and Dublin, Ireland.

Appendix

OpenMusic

OpenMusic (OM) is a visual programming environment developed at Ircam and dedicated to musical composition. It allows the user to create and to experiment with compositional models through programming, by means of a graphic interface that represents musical structures and processes.

Many of the chapters in this book refer to that environment. In order to make these chapters and their illustrations easier to understand, the basic OM concepts are dealt with in this appendix.

1 Patches
Patches are graphic representations of the programme. The composer uses them to connect graphically represented boxes. Each box represents a function, and has a set of inputs (at the top of the box) as well as a set of outputs (at the bottom of the box) for connecting it to other boxes. These connections define the functional layout of the programme, i.e. what it will do.
OM provides a number of functions of varying complexity and specialisation; others may be created by the user. Figure 1 shows a very simple patch using the functions for the addition and multiplication of integers. It is mathematically equivalent to (3 + 6) × 100.

Figure 1. A patch that is the mathematical equivalent of (3 + 6) × 100

2 A Patch in a patch
If the number 3 in the expression (3 + 6) × 100 is made into a variable, the result is a single-parameter function defined as follows: f(x) = (x + 6) × 100. Figure 2 shows a patch corresponding to this function. Graphically speaking, the variable x is represented by an arrow-shaped box (at the top of figure 2). The output arrow (at the bottom) specifies the value that results when the patch is calculated.

Figure 2. A patch that corresponds to the function f (x) = (x + 6) × 100

The patch in figure 2 can in turn be considered as a function call, and can be used in the form of a graphical box in other patches. This makes it possible to create patch abstractions, which can then be used in various contexts. Figure 3 shows the patch from figure 2 in the form of a box, with the number 5 as its argument, and its output divided by 10. The result of the calculation will therefore be f(5)/10 = ((5 + 6) × 100)/10 = 110.

Figure 3. A patch in another patch

Within the limits of what can be set down on paper, the articles in this book contain various illustrations for purposes of explaining compositional processes. These illustrations refer to the main patch of each process as well as to the various internal patches.

3 Objects

Data structures (classes) can be used in the patches in order to create and manipulate
objects (particularly musical ones). This kind of structure is also shown as a graphical
box, with inputs and outputs that give access to its programme content. Figure 4 contains
a note type box. In addition to the first input (and the first output) that correspond
to the note itself, this box has 4 other inputs (and outputs). From left to right, these
correspond to pitch, duration, intensity and Midi channel. The upper note in this patch
receives the value 6700 as a pitch argument (in midicents this is equal to G3). This
value is transmitted to the + box that increments it by 1200 (one octave). The box at
the bottom of the patch receives this new value and creates a new note (G4, one octave
higher).

Figure 4. Musical objects in a patch

Figure 5 shows a more elaborate patch, algorithmically generating a musical sequence. It contains BPF-type objects (break-point functions), chord-seq objects (chord sequences represented in absolute time values) and voice objects (chord sequences in musical rhythmic notation). Among the most frequently used objects in this book, the reader will also find a number of other musical classes: chord, poly (polyphony made up of several voices), midifile, sound (audio file), etc.

Figure 5. An example of a musical sequence being created in a patch. The pitches are generated from the data in a break-point function, and the rhythm is created out of proportional relationships between duration values

4 Editors
Each object class possesses a graphic editor, enabling the display and manual editing
of musical data. Figure 6 shows the graphic editor of the voice type object that was
constructed in the previous example.

Figure 6. Musical editors

5 Control
Control structures, usually written in conventional programming languages (loops, conditional expressions, etc.), also take the form of boxes, and are used in visual OM patches. The example in figure 7 uses a conditional box (omif) that checks whether the variable x is equal to zero. If this is the case, the result of the evaluation (or calculation) is the note C3; in the opposite case, the result is the note G3.

Figure 7. A conditional box

The example in figure 8 uses a loop (omloop) that carries out an iteration on a pitch
list (6000 6100 6200 6300 6400 6500 6600 6700 6800 6900 7000 7100 7200), and then
generates a chord sequence from the pitches.

Figure 8. The omloop box carries out an iteration of a pitch list in order to enrich each element
by two additional pitches

Figure 9 shows the programme carried out by the omloop box. The pitch list is represented by the input0 box (the arrow at the top of the patch). The listloop box receives this list and sends back the elements of the list one by one at each iteration (6000 for the first iteration, 6100 for the second, etc.). Each pitch is transformed into a chord by adding a minor third (300) and a fifth (700). The result of each iteration is collected into a list by the collect box. Once the last value in the list has been reached (7200), the collected list is transmitted to the finally box, which yields the result of the omloop box.

Figure 9. A patch linked to the omloop box from figure 8

6 Maquettes
A maquette is an original interface designed for the purpose of uniting the programme
and the score in a single display.

Figure 10. Example of a maquette

This is a 2-dimensional space in which graphic boxes may be laid out and organised. A box may contain an isolated musical object, a programme (patch) that generates an object, or an internal maquette. Boxes may be connected to each other and calculated as in a patch. Internal maquettes enable the user to construct hierarchical time structures.
The horizontal axis of the maquette represents time, in such a way that the positions and sizes of the graphic boxes correspond to offset and duration values. The vertical axis (called y) is a parameter that may be used in different ways in calculations.
Figure 10 is an example of a maquette in which 3 notes (C3, E3 and G3) are laid out along the time axis. These notes are used as calculation parameters for a fourth musical object (a chord sequence). The box that contains this chord sequence is linked to a patch.

Figure 11. A patch calculating a chord sequence in a maquette

The patch in figure 11 shows the programme that calculates the chord sequence. The three notes are represented by input boxes. The repeat-n box takes a note at random from among the 3 notes, 6 times over, in order to make up chords by adding major third and fifth intervals.
After evaluation, the maquette can be played in linear sequence as a score.

The reader requiring more information on OpenMusic can consult the following
reference works:

[1] Agon, C.: ”OpenMusic : Un Langage Visuel pour la Composition Assistée par Or-
dinateur”. PhD. Thesis, Université Paris VI, 1998.
[2] Agon, C., Assayag, G., Laurson, M. and Rueda, C.: "Computer Assisted Composition at Ircam: PatchWork & OpenMusic". Computer Music Journal 23(3), 1999.
[3] Agon, C. and Assayag, G.: ”Programmation Visuelle et Editeurs Musicaux pour la
Composition Assistée par Ordinateur” IHM’02, Poitiers, France, ACM Computer
Press, 2002.
[4] Bresson, J., Agon, C. and Assayag, G.: "OpenMusic 5: A Cross-Platform Release of the Computer-Assisted Composition Environment". Proceedings of the 10th Brazilian Symposium on Computer Music, Belo Horizonte, Brazil, 2005.

Dépôt légal : Janvier 2006
Imprimé en France