
University of Iowa

Iowa Research Online


Theses and Dissertations

2010

Musical time and information theory entropy


Sarah Elizabeth Culpepper
University of Iowa

Copyright 2010 Sarah Elizabeth Culpepper


This dissertation is available at Iowa Research Online: http://ir.uiowa.edu/etd/659
Recommended Citation
Culpepper, Sarah Elizabeth. "Musical time and information theory entropy." MA (Master of Arts) thesis, University of Iowa, 2010.
http://ir.uiowa.edu/etd/659.

Follow this and additional works at: http://ir.uiowa.edu/etd


Part of the Music Commons

MUSICAL TIME AND INFORMATION THEORY ENTROPY

by
Sarah Elizabeth Culpepper

A thesis submitted in partial fulfillment
of the requirements for the Master of
Arts degree in Music
in the Graduate College of
The University of Iowa
July 2010
Thesis Supervisor: Assistant Professor Robert C. Cook

Graduate College
The University of Iowa
Iowa City, Iowa

CERTIFICATE OF APPROVAL
_______________________
MASTER'S THESIS
_______________
This is to certify that the Master's thesis of
Sarah Elizabeth Culpepper
has been approved by the Examining Committee
for the thesis requirement for the Master of Arts
degree in Music at the July 2010 graduation.
Thesis Committee: ___________________________________
Robert C. Cook, Thesis Supervisor
___________________________________
Nicole Biamonte
___________________________________
Jerry Cain

Is their wish so unique
To anthropomorphize the inanimate
With a love that masquerades as pure technique?

Donald Justice
"Nostalgia of the Lakefronts"


TABLE OF CONTENTS
LIST OF TABLES ............................................................................................................. iv
LIST OF FIGURES ........................................................................................................... vi
CHAPTER
I.    INTRODUCTION ............................................................................................1
II.   INFORMATION THEORY ENTROPY ..........................................................6
III.  EXISTING MUSIC-THEORETIC SCHOLARSHIP ON
      INFORMATION THEORY ENTROPY ........................................................21
IV.   ALPHABETS FOR ENTROPY-BASED ANALYSIS ..................................36
         Interval Entropy ..............................................................................................36
         CSEG Entropy ................................................................................................46
         PC-Set Entropy ...............................................................................................61
V.    INFORMATION AND TIME ........................................................................67
VI.   ANALYSES ...................................................................................................80
         Op. 16, no. 1: Christus factus est ....................................................................80
         Op. 5, no. 4 .....................................................................................................97
VII.  CONCLUSION .............................................................................................115

BIBLIOGRAPHY ............................................................................................................117


LIST OF TABLES
Table
3.1. Pitch entropies from Youngblood .............................................................................21
4.1. Pitch entropies in Webern works, compared with Babbitt and Schubert .................36
4.2. Interval class entropies comparing serial and non-serial works ...............................38
4.3. Vertical and horizontal entropy on one serial and one non-serial work ...................39
4.4. Registrally-ordered interval class entropy in Webern and Babbitt ...........................40
4.5. Ordered directional interval class entropy in serial and non-serial works ................42
4.6. Interval entropy in Webern and Babbitt ...................................................................44
4.7. CSEG entropies for random and motivic strings ......................................................49
4.8. CSEG entropies, random string versus Webern, op. 5, no. 1 ...................................51
4.9. CSEGs, random string versus Webern, op. 5, no. 1 .................................................52
4.10. CSEG entropies, op. 5, no. 1, versus op. 18 .............................................................54
4.11. CSEG entropies, op. 5, no. 1, versus op. 15 .............................................................56
4.12. CSEG entropies for serial works ..............................................................................60
4.13. Pc-set entropies in op. 16 and op. 25 using discrete segmentation algorithm ..........62
4.14. Pc-set entropies in op. 16 and op. 25 using window algorithm ................................64
4.15. Vertical pc-set entropy in op. 16 and op. 25, no. 1 ...................................................65
6.1. Pitch class entropy in op. 16, no. 1 ...........................................................................87
6.2. Pitch class entropy in the vocal line of op. 16, no. 1. ...............................................88
6.3. CSEG entropies in op. 16, no. 1 ...............................................................................89
6.4. Interval class entropy in op. 16, no. 1 .......................................................................91
6.5. Discrete pc-set entropies in op. 16, no. 1 ..................................................................91
6.6. Pitch entropy in sections of op. 5, no. 4 ..................................................................104
6.7. Interval class entropies in op. 5, no. 4 ....................................................................105
6.8. Discrete pc-set entropies in op. 5, no. 4 ..................................................................106

6.9. Pitch entropy in op. 5, no. 4, A and B.....................................................................108
6.10. Interval entropies in op. 5, no. 4, A and B ..............................................................109

LIST OF FIGURES
Figure
2.1. A corrupted tonal work .........................................................................................16
2.2. A corrupted contextual work .................................................................................16
3.1. 95% confidence intervals for Youngblood's entropy calculations ...........................23
3.2. Passage with pitch class entropy 2.52 .......................................................................28
3.3. Passage with pitch class entropy 2.52 .......................................................................28
4.1. Interval class entropies comparing serial and non-serial works ...............................38
4.2. Registrally-ordered interval class entropy in Webern and Babbitt ...........................41
4.3. Ordered directional interval class entropy in serial and non-serial works ...............42
4.4. Interval entropy in Webern and Babbitt ...................................................................44
4.5. A randomly-generated string of pitches....................................................................48
4.6. A motivic string of pitches........................................................................................49
4.7. CSEG entropies for random and motivic strings ......................................................50
4.8. CSEG entropies, random string versus Webern, op. 5, no. 1 ...................................51
4.9. CSEGs, random string versus Webern, op. 5, no. 1 .................................................53
4.10. CSEG entropies, op. 5, no. 1, versus op. 18 .............................................................55
4.11. CSEG entropies, op. 5, no. 1, versus op. 15 .............................................................56
4.12. Melody generated using the CSEG distribution of Webern, op. 5, no. 1 .................57
4.13. Melody generated using the CSEG distribution of a string of random pitches. .......58
4.14. CSEG entropies for serial works ..............................................................................60
4.15. Op. 27, no. 1, mm. 20-21 ..........................................................................................61
4.16. Pc-set entropies in op. 16 and op. 25 using discrete segmentation algorithm ..........63
4.17. Pc-set entropies in op. 16 and op. 25 using window algorithm ................................64
4.18. Vertical pc-set entropy in op. 16 and op. 25, no. 1 ...................................................65
6.1. Vertical ic1s in op. 16, no. 1 ....................................................................................83

6.2. Pitch class entropy in op. 16, no. 1 ...........................................................................88
6.3. Pitch class entropy in the vocal line of op. 16, no. 1 ................................................89
6.4. CSEG entropies in op. 16, no. 1 ...............................................................................90
6.5. Interval class entropy in op. 16, no. 1 .......................................................................91
6.6. Discrete pc-set entropies in op. 16, no. 1 ..................................................................92
6.7. Lewin's depiction of the three flyaway motives. ......................................................98
6.8. Pc-set analysis of op. 5, no. 4 .................................................................................100
6.9. Clampitt's analysis of op. 5, no. 4, mm. 1-6 ...........................................................101
6.10. Pitch entropy in op. 5, no. 4 ....................................................................................104
6.11. Interval class entropy in op. 5, no. 4. ......................................................................105
6.12. Registrally-ordered interval class entropy in op. 5, no. 4 .......................................106
6.13. Discrete pc-set entropies in op. 5, no. 4 ..................................................................107
6.14. Pitch entropy in op. 5, no. 4, A and B.....................................................................108
6.15. Interval class entropy in op. 5, no. 4, A and B........................................................109
6.16. Registrally-ordered interval class entropy in op. 5, no. 4, A and B ........................110

CHAPTER I
INTRODUCTION

In the conclusion of The Time of Music (1988), Jonathan Kramer gives two
anecdotes of his personal experiences with what he calls musical timelessness. The first
recalls a performance of the middle movement of Satie's Pages mystiques, a collection of
phrases repeated 840 times in succession:
For a brief time I felt myself getting bored, becoming imprisoned by a
hopelessly repetitious piece. Time was getting slower and slower,
threatening to stop. But then I found myself moving into a different
listening mode. I was entering the vertical time of the piece. My present
expanded, as I forgot about the music's past and future.... After what
seemed forty minutes I left. My watch told me that I had listened for
three hours. I felt exhilarated, refreshed, renewed.1

The second anecdote concerns the opposite condition, a happening dense enough to
induce sensory overload:
The production began at 7:00 p.m. The noise level was consistently high,
and the visual panorama was dizzying. I found myself, although performing,
focusing my attention on one layer, then another, and then various combinations
of layers.... After what seemed to be a couple of hours, everyone spontaneously
agreed that it was time to stop... I loaded my tape and slides into my car. Only
then did I glance at my watch. It was not yet 8:00! What had seemed like a two-hour performance must have lasted under 25 minutes by the clock.2

Kramer attributes the disparity between these temporal experiences to the amount and
density of information each performance contained. Music that is predictable and easily
chunked, he argues, takes up less mental storage space and seems shorter than music
that is less predictable: "Thus a two-minute pop tune will probably seem shorter than a
two-minute Webern movement."3

1. Jonathan Kramer, The Time of Music (New York: Schirmer, 1988), 379.

2. Ibid., 380.
The connection between musical predictability and perception of musical time is a
common one. Kramer characterizes musical temporalities as directed, multiply-directed,
and non-directed based on their movement towards a predictable goal.4 Re-ordered
temporal progressions, such as the misplaced closing gestures Levy finds in Haydn and
the evolving themes Hatten finds in Beethoven, draw power from their violation of
listener expectations.5 Although complicating factors abound (the audience's familiarity
with a musical idiom, a tendency to disengage from overly predictable works, how
comfortable the chairs are), the existence of a connection between time and predictability
is clear.
This thesis examines the relationship between time and predictability through the
lens of information theory entropy. Just as traditional entropy speaks to the degree of
randomness in a system, information theory entropy speaks to the randomness of a
message or, alternately, to that message's predictability. Although information theory
entropy was initially developed to determine the most efficient way to encode a message
for radio transmission, it has since been adopted as an analytical tool by a variety of
fields, including linguistics, literary criticism, and music theory.
In particular, information theory entropy seems relevant to Webern's music.
Adorno refers to Webern's work as possessing "a skeletal simplicity," a comparative
economy of musical materials that seems well-suited for analysis in terms of information

3. Ibid., 337.

4. Ibid., 16ff.

5. Janet Levy, "Gesture, Form, and Syntax in Haydn's Music," in Haydn Studies: Proceedings of the International Haydn Conference (New York: Norton, 1981), 355-362; Robert Hatten, "The Troping of Temporality in Music," in Approaches to Meaning in Music, ed. Byron Almen and Edward Pearsall (Bloomington: Indiana University Press, 2006), 66ff.
theory in the sense that no pitch or gesture seems superfluous or reducible, as though its
omission would not have a marked effect on the passage, or as though it had only been
added to fill space before the beginning of the next phrase.6 (In Adorno's words: "Every
single note in Webern fairly crackles with meaning."7) Literary applications of
information theory entropy speak meaningfully to this economy as a feature of poetry, as
will be shown in a later chapter; I believe entropy can speak to these same qualities in
Webern's work.
Webern's music is also of interest to this project because of the relationship
between information content and the listener's perception of time, as will be discussed in
chapter 5. Certainly perception of time is salient to analysis of Weberns work. As
Stockhausen writes, "If we realise, at the end of a piece of music... that we have 'lost all
sense of time', then we have in fact been experiencing time most strongly. This is how we
always react to Webern's music."8 In a different vein, Ligeti describes Webern's music as
"the spatialization of time."9 Perception and analysis of time in Webern is, at the very least,
complicated, but entropy provides a useful metaphor for its description and a useful tool
for its examination.
In the 2009 article "Number Fetishism," Vanessa Hawes criticizes music-theoretic
use of information theory as... well, as number fetishism: as a component of the claim

6. Theodor Adorno, "The Aging of the New Music," in Essays on Music, ed. Richard Leppert, trans. Susan Gillespie (Berkeley and Los Angeles: University of California Press, 2002), 187.

7. Theodor Adorno, Quasi una Fantasia: Essays on Modern Music, trans. Rodney Livingstone (New York: Verso, 1998), 180.

8. Karlheinz Stockhausen, "Structure and Experiential Time," Die Reihe 2 (Bryn Mawr, PA: Presser, 1959), 65.

9. Gyorgy Ligeti, "Metamorphoses of Musical Form," Die Reihe 7 (Bryn Mawr, PA: Presser, 1965), 16.
that music theorists can consider themselves scientists who refute or uphold hypotheses
based on empirical evidence, a notion she depicts as quaint and outdated.10 Indeed, early
uses of information theory often relied upon questionable assumptions, as Hessert (1971)
claims, and were often divorced from diachronic perception of music.11 Nevertheless,
insofar as information theory entropy measures predictability (a very salient factor in
diachronic perception of music) it can be a relevant lens for the examination of musical
time.
Using information theory to quantify subjective musical temporality would be
questionable indeed, but using information theory to analyze and discuss temporality
seems much less problematic. Writing about traditional entropy, Eddington clarifies the
situation:
Suppose that we were asked to arrange the following in two categories:
distance, mass, electric force, entropy, beauty, melody....
I think there are the strongest grounds for placing entropy alongside
beauty and melody, and not with the first three. Entropy is only found
when the parts are viewed in association, and it is by viewing or hearing
the parts in association that beauty and melody are discerned. All three
are features of arrangement. It is a pregnant thought that one of these
three associates should be able to figure as a commonplace quantity of
science. The reason why this stranger can pass itself off among the
aborigines of the physical world is that it is able to speak their language,
viz., the language of arithmetic.12

Entropy is discussed in terms of number but is not the fetishism of number; rather, it is a
10. Vanessa Hawes, "Number Fetishism: The History of the Use of Information Theory as a Tool for Musical Analysis," in Music's Intellectual History 2009, ed. Zdravko Blazekovic and Barbara Dobbs Mackenzie (New York: RILM, 2009), esp. 836-838.

11. Norman Hessert, "The Use of Information Theory in Musical Analysis" (Ph.D. diss., Indiana University, 1971).

12. A. Eddington, The Nature of the Physical World (Ann Arbor: University of Michigan Press, 1935), 105.
powerful and elegant principle that can be expressed quantitatively. Similarly,
information theory entropy need not be a formula divorced from musical experience, but
can instead be an analytical tool and metaphor for the discussion of something deeply
experiential and even, as Meyer (1957) claims, a way to approach the question of
musical meaning.13
This thesis begins with an explication of information theory entropy (chapter 2)
and a history of its use in music theory (chapter 3). In chapter 4, a variety of alternative
approaches to entropy are developed, including entropy calculations based on CSEGs and
pc-sets (as opposed to single pitch classes). Chapter 5 makes a more in-depth argument
for the relationship between information theory entropy and time, recasting analyses of
temporality in Webern in terms of entropy. Finally, in chapter 6, information theory
entropy will be used to analyze time in the first of the Fnf Canons, op. 16, and the fourth
of the Fnf Stze, op. 5 two movements in which form is created by perceptible shifts
among differing depictions of temporality, shifts prompted by varying degrees of
predictability in a variety of musical domains.

13. Leonard Meyer, "Meaning in Music and Information Theory," Journal of Aesthetics and Art Criticism 15, no. 4 (1957): 412-424.
CHAPTER II
INFORMATION THEORY ENTROPY

Information theory entropy is based on the idea that in most alphabets, some
letters communicate more information than other letters do, because they occur less
frequently. If a word has been corrupted during transmission and all that remains is q _ _
_ k, the recipient can easily guess what the original word was, since there are very few
words that contain both a q and a k. By contrast, if all that remains of the word is _ _ i c
_, the original word is much more difficult to guess. Since q and k are uncommon, they
communicate more information about the original message than more common letters
can.14
In general, the more unequal the frequencies of letters in an alphabet are, the
easier it is to determine what letters have been corrupted. If an alphabet only has two
letters, A and B, but the former occurs 90% of the time and the latter occurs 10% of the
time, the message recipient has an excellent chance of guessing any letters that have been
corrupted (since there is a 90% chance any given letter will be an A). By contrast, if A
and B each appear 50% of the time, our ability to guess a missing letter is diminished.
From the perspective of a person sending a telegram, the former language is very
inefficient. Assume, for simplicity, any message in this language must contain exactly
90% As and 10% Bs (although in a real language, these would be averages). If the

14. Some more in-depth sources on information theory entropy: A. Khinchin, Mathematical Foundations of Information Theory (New York: Dover, 1957); Abraham Moles, Information Theory and Esthetic Perception, trans. Joel Cohen (Urbana and London: University of Illinois Press, 1966); Lawrence Rosenfield, Aristotle and Information Theory (Paris, The Hague: Mouton, 1971); Claude Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal 27 (1948): 379-423; Claude Shannon and Warren Weaver, The Mathematical Theory of Communication (Urbana: University of Illinois Press, 1949). Information in this chapter is drawn heavily from these sources, as well as from the music-theoretic sources cited in Chapter 3.
transmitter is limited to ten characters, there are exactly ten words s/he can send:
AAAAAAAAAB, AAAAAAAABA, AAAAAAABAA, and so on. The letter A is so
common that it is practically meaningless; only the position of the less common letter
differentiates between words, but it occurs very rarely. By contrast, in a language that is
50% A and 50% B, the transmitter would have 2^10 or 1024 word choices. By creating
an alphabet in which all letters occur with the same frequency, the efficiency of
transmission is maximized.
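The arithmetic of this comparison is easy to verify. The following sketch is mine, not the thesis's; it simply counts the messages described above (one B among ten characters versus ten free binary choices):

```python
from math import comb

# Constrained language: exactly one B among ten characters, so the only
# degree of freedom is the B's position: C(10, 1) = 10 possible words.
constrained_words = comb(10, 1)

# Unconstrained 50/50 language: every position is a free binary choice,
# giving 2**10 = 1024 possible ten-character words.
free_words = 2 ** 10

print(constrained_words, free_words)  # 10 1024
```

The same count explains why the equal-frequency alphabet is the more efficient one: each character position carries a full bit of information instead of almost none.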
Of course, in addition to being more efficient, the latter alphabet is less resistant
to corruption. Ideally, one must find a balance between the most efficient language
possible and the most robust language possible, to be sure that the message arrives to its
recipient intact but without wasting time or resources during transmission. Finding this
balance generally for the purpose of data compression or encoding was one of the
first goals of the field of information theory, pioneered by Bell Labs engineer Claude
Shannon in the late 1940s.
The inequality of the amount of information contributed by each letter in an
alphabet is called the Shannon entropy of that alphabet. If Shannon entropy is low, the
language is inefficient but robust; a few letters occur very frequently and the rest are
uncommon. If Shannon entropy is high, the language is efficient; each letter occurs with
roughly the same frequency and therefore each letter conveys the most information
possible.
The Shannon entropy of a message or an alphabet is given by the following
formula:

    H(X) = -Σ p(x) log2 p(x)

where the sum runs over every letter x of the alphabet. Here, p(x) is the probability that a
given event occurs; p(x = 6) denotes the probability that a randomly selected pitch will be
an F# (pitch class 6), for example.
The example of bits illustrates the purpose of the logarithm in this formula. Each
bit presents two choices; given six bits, the number of combinations that can be
communicated is two to the sixth power. The entropy formula can be seen as taking the
number of possible choices (here, expressed in terms of probability) and returning the
number of bits that would be necessary to communicate that much information.15
(Log base two is necessary to express these results in terms of bits. Another log base
would create meaningful data if used consistently, but these data would be in terms of
other units of measurement.) Effectively, the use of logarithms in this formula ensures
that the highest entropy is created when each possible outcome has an equal probability
of occurring, and that the lowest entropy is created when one event has a very high
probability of occurring. Consider an alphabet that has one letter, A, that occurs 100% of
the time. The entropy for this language is

    H = -(1.0 log2 1.0) = 0

that is, since we are absolutely certain every letter will be an A, the language has an
uncertainty of zero and an entropy of zero. The closer any probability gets to 1, the
smaller the language's entropy becomes. For example, if this language had three letters
instead, in which A occurred 98% of the time, and B and C each occurred 1% of the time,
the entropy of the language would be

    H = -(0.98 log2 0.98 + 0.01 log2 0.01 + 0.01 log2 0.01) ≈ 0.16

15. See Khinchin or Shannon and Weaver for more information.
The logarithmic expression makes the contribution of the first term very small, whereas
the small probabilities make the contributions of the second and third terms very small as
well. By contrast, if each option occurs with roughly equal frequency, the entropy of the
language is

    H = -(3 × (1/3) log2 (1/3)) = log2 3 ≈ 1.58

which is the highest possible entropy for an alphabet with three letters. Of course, the
more letters in an alphabet, the higher the maximal entropy becomes. If this same
equally-weighted alphabet had eight letters, its entropy would be

    H = -(8 × (1/8) log2 (1/8)) = log2 8 = 3

An alphabet with twenty-six letters has a maximal entropy of 4.7; an alphabet with a
hundred letters has a maximal entropy of 6.64.
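The worked examples above can be reproduced with a few lines of code. This sketch is an editorial addition, not part of the thesis; it implements the formula exactly as defined:

```python
from math import log2

def shannon_entropy(probabilities):
    """Shannon entropy in bits: sum of -p * log2(p) over nonzero probabilities."""
    return sum(-p * log2(p) for p in probabilities if p > 0)

print(round(shannon_entropy([1.0]), 2))               # 0.0  (total certainty)
print(round(shannon_entropy([0.98, 0.01, 0.01]), 2))  # 0.16
print(round(shannon_entropy([1/3] * 3), 2))           # 1.58 (maximal, 3 letters)
print(round(shannon_entropy([1/8] * 8), 2))           # 3.0
print(round(shannon_entropy([1/26] * 26), 2))         # 4.7
print(round(shannon_entropy([1/100] * 100), 2))       # 6.64
```

Note that for an equal-weighted alphabet of n letters the sum collapses to log2(n), which is why the maximal entropies above are simply log2 of the cardinality.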
It is clear from these examples that entropy is most useful for comparisons. The
claim that an alphabet with a hundred letters has a maximal entropy of 6.64 is not terribly
meaningful on its own; it only takes on meaning when paired with the statement that an
alphabet with three letters has a maximal entropy of 1.58, or with other entropy
calculations from hundred-letter alphabets.
To allow more meaningful comparisons between entropies of alphabets with
different cardinalities, we introduce the concept of relative entropy, which expresses
entropy values (as computed above) as a percentage of the maximal possible entropy for
an alphabet of that cardinality. For example, the relative entropies of the cardinality three
and cardinality eight alphabets discussed above are

    1.58 / log2 3 = 100%

and

    3 / log2 8 = 100%

respectively. Thus, we can think of these two alphabets as having equivalent entropies,
even if their absolute entropies are not equal.
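As a computational sketch (mine, not the thesis's own code), relative entropy simply divides the absolute entropy by the maximal entropy for the alphabet's cardinality, log2(cardinality):

```python
from math import log2

def shannon_entropy(probabilities):
    return sum(-p * log2(p) for p in probabilities if p > 0)

def relative_entropy(probabilities, cardinality):
    """Entropy as a fraction of the maximum possible for this alphabet size."""
    return shannon_entropy(probabilities) / log2(cardinality)

# The equal-weighted three- and eight-letter alphabets both sit at 100%:
print(round(relative_entropy([1/3] * 3, 3), 2))   # 1.0
print(round(relative_entropy([1/8] * 8, 8), 2))   # 1.0

# A passage using only 13 letters equally, heard against the 26-letter
# English alphabet, scores lower than against a 13-letter alphabet:
print(round(relative_entropy([1/13] * 13, 26), 2))  # 0.79
```

The last call illustrates the point made below about unused letters: passing the full alphabet's cardinality lets the zero-probability letters lower the relative figure even though they contribute nothing to the absolute entropy.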
Relative entropy also allows entropy calculations to reflect unused letters in a
passage. Intuitively, a passage of English text that uses only thirteen letters should not
have the same entropy as a passage of Hawaiian. One imagines the former would seem
more stilted, more restricted than the latter, since a listener would hear it in the context of
a twenty-six letter alphabet, rather than a thirteen-letter alphabet. Similarly, a piece that
only uses the pitches C, C#, Eb, G#, A, A#, and B with given frequencies is very different
from a passage of chant that uses each of its seven tones with the same frequencies as the
above piece. While it is likely that the former piece will be heard as using a restricted
subset of a twelve-pitch alphabet, the latter piece exhausts its alphabet and would not be
heard as restricted in its materials in the same way as the former. The traditional entropy
formula is unable to reflect this distinction, because any unused letters carry with them a
probability of 0, effectively canceling out any entropic contribution from those letters, but
these unused letters are relevant to the computation of relative entropy through their
inclusion in the maximal entropy for an alphabet of a given cardinality.
Nevertheless, the use of relative entropy requires caution. A piece of music that

uses three pitches with equal frequency is much more predictable, mathematically and
aurally, than a piece of music that uses twelve pitches with equal frequencies, even
though their relative entropies are equal. In other words, although relative entropy
allows for comparison between alphabets of different cardinalities, such a comparison
must always be considered alongside the alphabets respective absolute entropies. In this
paper, relative entropy will only be invoked in the presence of corresponding absolute
entropy figures or some sort of intuitive justification for hearing these alphabets as
perceptually similar.
It is also clear from these examples that the entropy of a message (at least,
entropy computed on the literal letters of an alphabet) is independent of the meaning of
that message. Entropy only reflects characteristics of the language in which that message
is written or encoded. However, the meaning of a message may become relevant if the
'alphabet' in question is not a literal alphabet. For years, literary critics (especially
modernists and post-modernists, and in particular those interested in the work of Thomas
Pynchon) have completed information theory analyses of texts using words or images,
instead of literal letters, as the letters of an alphabet. In this case, the most commonly
occurring letters are connective words like articles and prepositions. Consider, for
example, a corrupted block of text from F. Scott Fitzgerald's The Great Gatsby, from
which every eighth word has been removed:
When we pulled out into the winter and the real snow, our snow, began
stretch out beside us and twinkle against windows, and the dim lights
of small Wisconsin echoed by, a sharp wild brace came into the air. We
drew in deep of it as we walked back from through the cold vestibules,
unutterably aware of identity with this country for one hour before we
melted indistinguishably into it again.
Although the result is disjointed in places, it is certainly still intelligible; in places the
reader cannot tell the message has been corrupted at all. Every image found in this

excerpt is repeated; if the word 'winter' were corrupted, 'snow' and 'cold' would still
convey its meaning. Additionally, the passage contains many connective words and
when these words are missing (as in "our snow began stretch out beside us") the blanks
can be filled in easily. We conclude that this passage has low word-based entropy,
regardless of any entropy figures computed on the basis of individual letter frequency.
For comparison, an excerpt from Flann O'Brien's At Swim-Two-Birds (considered
the first Irish post-modern novel) has been similarly corrupted below.
I will relate, said Finn. Till a man accomplished twelve books of poetry,
the same is not for want of poetry but is forced away. man is taken till a
black hole is in the world to the depth of his oxters and he put into it to
gaze it with his lonely head and nothing to but his shield and a stick of.
Then must nine warriors fly their at him, one with the other and together.
From this, we can gather that we are listening to a narrator named Finn; word repetition
clues us in that poetry and war are somehow involved, but there is little else we can say
about this passage. This same lack of repetition makes the original, non-corrupted
passage more difficult to understand than the non-corrupted Fitzgerald.
I will relate, said Finn. Till a man has accomplished twelve books of poetry,
the same is not taken for want of poetry but is forced away. No man is taken
till a black hole is hollowed in the world to the depth of his two oxters and
he put into it to gaze from it with his lonely head and nothing to him but his
shield and a stick of hazel. Then must nine warriors fly their spears at him,
one with the other and together.

In examining the original passage, we are in fact examining different sources of
corruption. The Fitzgerald is robust against the corruption of readers lacking context, or
readers being sleepy; in the presence of these forms of corruption the passage is still
readable. The O'Brien is much less robust by comparison. We conclude the passage has
higher entropy.

Alternately, we can conclude that the O'Brien passage is more efficient than the
Fitzgerald, since each individual word communicates more information. If the reader can
easily guess the meaning of a missing word, as in the Fitzgerald, then that word has a
very low information content; with these words removed, the passage becomes less
elegant but not much less intelligible. This is the same quality that makes this passage
easy to summarize. Since removing words from the O'Brien limits the reader's ability to
comprehend the passage, we can conclude that the missing words had a higher
information content, that overall there are fewer redundant or repeated words, and that
therefore the O'Brien is a more efficient communication.
Finally, we examine a passage from "Todtnauberg" by Paul Celan.16
Arnica, Eyebright, the
drink from the with the
star-die on top,
in the
into the book
whose name did it in
before mine?
the line written into
this about
a hope, today,
for a thinker's
(uncoming)
word
in the
The result is nearly unintelligible; the reader cannot guess the original narrator, subject,
or purpose of this passage. What remains is interpretable, certainly, but the reader cannot

16 Although this passage is shorter than the others, the same percentage of words has been removed in each case.

be certain of the original text based on this excerpt. Consequently, this passage has high
entropy.
As expected, lack of repeated words and lack of connective words contribute to
higher entropy. Shared meaning contributes to lower entropy as well, as seen in the
Fitzgerald example dealing with 'snow,' 'winter,' and 'cold.' From these examples, though,
we can also see that clear syntactical structures reduce entropy. If the reader can perceive
the sentence structure underlying We drew in deep ____ of it as we walked back from
____ through the cold vestibules, the reader can make more educated guesses as to what
the missing words could be. The second missing word appears to be some sort of place;
the first missing word is a noun that can be an object of the verb 'to draw in,' so perhaps
the missing word is 'breaths' or 'gasps' or something along those lines. The general import
of the sentence is still clear. Similarly, in the O'Brien sentence Then must nine warriors
fly their ____ at him, as long as the reader can parse that warriors are throwing things at
an unhappy target, the meaning of the sentence is clear.
By contrast, poetry, especially the works of Paul Celan, is characterized by
economy of words and imagery, in that every word contributes a great deal of meaning to
a passage. This is why the Celan example is the least intelligible of the above: there are
few redundant words, and the associations between words are specifically designed to be
unexpected and novel. In other words, each word is intended to convey the greatest
possible amount of information.
One could conceptualize this new, more meaning-sensitive interpretation of
entropy as occurring on a higher level than entropy computed based on literal letters of an
alphabet. If this Fitzgerald sample were encoded in a different alphabet (if it were
written in binary, or encrypted for secure transmission) without changing its vocabulary,
its low-level, alphabet-based entropy would be quite different but its higher-level,
word-based entropy would be the same. To achieve a word-based equivalent to encryption, one
would need a paraphrase of this text by another author, or a similar text that
communicates the same images (snowfall; evening; solitude) or the same themes
(introspection; nostalgia; the notion that a person's actions and mindsets are influenced
by that person's home17) using thriftier vocabulary.
These corrupted blocks of text can be seen as analogous to hearing music in a
static-filled radio broadcast. Listening to a Haydn string quartet in such a broadcast, one
would still be able to identify the key, the time signature, and the instrumentation; one
could make an educated guess as to which movement the quartet was playing, and
probably one could even hum the missing notes. By contrast, listening to such a broadcast
of the Webern Concerto, op. 24, one might not even be able to determine the
orchestration of the piece, let alone guess the missing notes. One can imagine a similar
corruption of the original musical signal being created by a poor ensemble; in this
situation, the Haydn can be considered to have a lower entropy because ensemble
mistakes, whether wrong notes or dynamic mismatches or harmonic misalignments, are
generally much more recognizable than the corresponding mistakes would be in the
Webern. Because the listener is (usually) able to form more confident predictions for
upcoming events in the Haydn, violations of these predictions (including mistakes) are
more striking.18
Alternately, consider the (comically) corrupted piece of music shown in Figure
2.1.
17 From the next paragraph: "I see now that this has been a story of the West, after all; Tom and Gatsby, Daisy and Jordan and I, were all Westerners, and perhaps we possessed some deficiency in common which made us subtly unadaptable to Eastern life."

18 This is a generalization, of course. Many Webern compositions can be considered to have low entropy in terms of dynamics, in which case a mistake in terms of dynamics would be immediately recognizable as such.


Figure 2.1: A corrupted tonal work

Despite the corruption, the identity of this piece is readily apparent. Even a listener who
had never heard this piece before could make a reasonable guess at every missing note,
based on typical harmonic progressions, repetition, and motive. By comparison, a
similarly corrupted, non-tonal work, shown in Figure 2.2, is less easy to identify.

Figure 2.2: A corrupted contextual work

A listener already familiar with the piece might be able to identify this as the third
movement of Webern's Variations for Piano, op. 27, but a listener unfamiliar with the
piece would not even be able to guess which of the corrupted objects were pitches and
which were rests. A listener who expects a serial work based on a derived row may be
able to fill in the blanks, surmising in retrospect that the first missing pitch must be a
Bb (creating the ordered interval series <-4 -1> to match the <+4 +1> of the inverted row
form beginning in m. 5), but probably not on first hearing without a score, and certainly
not as readily as in the Bach. In other words, the second work is more efficient, more
condensed. Because the missing pitches cannot be determined easily based on the
surrounding material, these pitches carry a high information content.
Other potential sources of corruption (beyond literal transmission factors like
radio static, a corrupted score, or poor acoustics, and figurative transmission factors like
poor performance) raise larger questions about the nature of entropy in music. One can
interpret an imprecise piano reduction of an orchestral work as a corruption of that
orchestral work, in roughly the same way one could consider a poorly executed English
translation of a German text a corruption of the original. However, if one considers
corruption as something that can happen within the music itself, as opposed to something
imposed upon the music by external factors (things like radio static or performers'
mistakes), it becomes difficult to decide which musical features are the original signal
and which are corruption: is a theme an original signal and its variations corruption? Is
the original A section of a ternary form an original signal and its altered A′ corruption?
Since entropy is inversely related to a message's ability to resist corruption, what can entropy be
said to measure in these cases? It may be meaningful to say that a theme "resists
variation" or that a melody "resists ornamentation," if the former is not very memorable
or if the latter is already very elaborate, but these states may or may not coincide with

entropy figures generated for these passages (in that a very elaborate melody may still be
very predictable and therefore have a low entropy, for example).
More to the point, this approach makes questionable implications about the nature
of musical meaning in such a work. Is it reasonable to consider a Stokowski transcription
as necessarily subsidiary to the work it transcribes, as opposed to an independent work in
its own right, even if the aesthetic of the transcription is meaningfully different from the
aesthetic of the original? If so, is it still reasonable to consider a Webern transcription of
Bach, or for that matter a Wendy Carlos performance of Bach, in the same light? In cases
of music not governed by a score, which performance is the canonical performance and
which is the corrupted performance?
Meyer also raises the issue of "cultural noise": corruption that occurs in
transmission as the result of "a time-lag between the habit responses which the audience
actually possess and those which the more adventurous composer envisages for it."19 This
can be understood as avant-garde music whose language an audience has not yet
internalized, or as pre-modern music heard differently by modern or post-modern
audiences. In this case, the music is not corrupted by any external factors, but the
audience's perception is; the issue is not signal transmission, but signal reception.
It seems most reasonable, for the purposes of this project, to consider each score
as an uncorrupted signal, accepting publisher and performer mistakes as corruption but
accepting changes that arise through arrangement as part of an original signal. (That is to
say, this project accepts Shelley's philosophy of translation: that a translation is or should
be a new artwork unto itself rather than a derivative work dependent upon an original.20)

19 Meyer, "Meaning in Music and Information Theory," 420.

20 Percy Shelley, A Defence of Poetry and Other Essays (1840; Project Gutenberg, 2005), http://www.gutenberg.org/etext/5428. See Part I.

The issue of cultural noise is important, because it is important in every work of analysis;
an information-theoretic analysis cannot assume an audience will hear a work the way an
ideal listener would, but neither can any other kind of analysis that wishes to reflect a
practical perceptual reality.
In any case, the factors that lead to high or low entropy in a musical example are
the same as in the excerpted Fitzgerald, O'Brien, and Celan texts. If we analyzed these
texts using literal letters as an alphabet, we would be able to identify the texts as English,
and we would probably be able to make general statements about the author's style; for
example, one could determine the average entropy for a passage saturated with Latinate
vocabulary and the average entropy for Anglo-Saxon vocabulary, based on which letters
occur the most frequently and which letters do not occur at all (such as w and j in Latin),
and from this make predictions about the loftiness or folksiness of the author's writing
style. Similarly, if we accept pitch as an alphabet, we can make predictions about how
diatonic or how chromatic a musical excerpt is, based on which pitches occur the most
frequently. However, loftiness of vocabulary does not result from avoiding the letters w
and j, any more than tonality results from using scale degrees 1 and 5 frequently. Low
entropy (on a pitch-by-pitch basis) is generally symptomatic of tonality, but does not
speak to the harmonic progressions that bring tonality into being.
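The mechanics of such a pitch-by-pitch calculation are straightforward. The sketch below is purely illustrative (the scale-degree melody is hypothetical, not drawn from any sample discussed here); it computes zero-order entropy from symbol frequencies, and relative entropy against an assumed alphabet size:

```python
from collections import Counter
from math import log2

def zero_order_entropy(symbols):
    """Shannon entropy: -sum of p * log2(p) over observed symbol frequencies."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def relative_entropy(symbols, alphabet_size):
    """Observed entropy as a fraction of the maximum, log2(alphabet size)."""
    return zero_order_entropy(symbols) / log2(alphabet_size)

# A hypothetical diatonic melody expressed as scale degrees:
melody = [1, 1, 5, 5, 6, 6, 5, 4, 4, 3, 3, 2, 2, 1]
print(round(zero_order_entropy(melody), 2))    # about 2.56 bits per note
print(round(relative_entropy(melody, 12), 2))  # about 0.71 of the twelve-letter maximum
```

Letter-based entropy for a text works identically: pass the letters of the passage as the symbol list and the alphabet size of the language in question.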
It may be inappropriate to claim that entropy created by pitches is directly
analogous to low-level, letter-based entropy in text. In some contexts a pitch may be
operating as a part of a word (for example, a single pitch within an arpeggiation), while in
other contexts that pitch may be a word unto itself. For this reason, pitch-based entropy
may be more relevant to musical analysis than letter-based entropy is to literary analysis.
Nevertheless, it seems reasonable to claim that the analysis of more complex musical
alphabets may strengthen the link between musical style or predictability and entropy

calculations, creating something more broadly comparable to word-based entropy in
text. In both music and in text, entropy (as perceived intuitively by the listener or reader)
is lowered by the presence of connective material (arpeggiations, passing tones,
parsimonious voice leading), repetition (motivic material, canons, imitation), and larger
structures (a T-P-D-T phrase structure, a serial row). If alphabets are built that can
address the existence or nonexistence of these elements and structures, a more intuitive
interpretation of entropy will result.
Generally speaking, entropy is less of a commentary on musical meaning than it is
a commentary on musical style, and the degree of redundancy or predictability with
which that meaning is communicated. With that said, though, it is impossible to divorce
the two concepts, just as the meaning of a text cannot be separated from the words with
which it is conveyed or, arguably, from the audience's interpretative creation of
meaning. As Meyer writes,
Both meaning and information are thus related through probability to
uncertainty. For the weaker the probability of a particular consequent
in any message, the greater the uncertainty (and information) involved
in the antecedent-consequent relationship.21
Earlier, Meyer highlights this same relationship as the source of musical meaning:
"Musical meaning arises when an antecedent situation, requiring an estimate as to the
probable modes of pattern continuation, produces uncertainty as to the temporal-tonal
nature of the expected consequent."22 Although this relationship has not always been the
focus of music theory's use of information theory entropy, Meyer's comments imply that
information theory entropy has potential insight into musical meaning as well as musical
style.
21 Meyer, "Meaning in Music and Information Theory," 416.

22 Ibid.

CHAPTER III
EXISTING MUSIC-THEORETIC SCHOLARSHIP ON INFORMATION THEORY
ENTROPY

Use of entropy in music theory is generally thought to begin with Youngblood's
1958 article "Style as Information," in which entropies are calculated for eight songs
from Schubert's Die schöne Müllerin, six arias from Mendelssohn's St. Paul, and six
songs from Schumann's Frauen-Liebe und Leben. Only melodies in major keys are
considered. In each case, a modified system of scale degrees is used as an alphabet; 1
indicates tonic, 2 indicates a raised tonic or a lowered supertonic, and so forth up to 12. His
zero-order results for these composers can be summarized as follows:

Composer        Zero-order Entropy    Zero-order Relative Entropy
Mendelssohn     3.03                  84.60%
Schumann        3.05                  85.00%
Schubert        3.13                  87.00%

Table 3.1: Pitch entropies from Youngblood

Youngblood finds the Mendelssohn sample to have the lowest entropy (or, alternately, the
greatest redundancy/inefficiency) of the three, although he finds all three composers to
have very similar entropies overall.23
Youngblood also compares the entropy values for these composers to the

23 Joseph Youngblood, "Style as Information," Journal of Music Theory 2, no. 1 (1958): 24-35.
entropies of a collection of randomly chosen Mode I chants. When these chants are
considered as representatives of a seven-note alphabet, they are found to have a much
higher relative entropy than the lieder and arias (HR=96.7%). Youngblood attributes this
to the chants' more regular use of non-final and non-tenor tones, as compared to the
lieder's marked preference for diatonic pitches over chromatic ones. Of course, when
considered as representative of a twelve-note alphabet, the chant selections have a lower
entropy than the works of all three later composers (H=2.72, HR=76%).24
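The sensitivity of relative entropy to the assumed alphabet can be checked directly from these figures. The sketch below applies the definition HR = H / log2(cardinality) to the twelve-note chant entropy of 2.72; note that the seven-note reading of this same raw figure lands close to, but not exactly at, Youngblood's reported 96.7% (his seven-note count presumably implies a slightly different H):

```python
from math import log2

def relative_entropy(H, alphabet_size):
    """Relative entropy: raw entropy over its maximum, log2 of the alphabet cardinality."""
    return H / log2(alphabet_size)

H_chant = 2.72  # Youngblood's twelve-note figure for the Mode I chants
print(round(relative_entropy(H_chant, 12) * 100, 1))  # vs. a twelve-note alphabet: ~75.9%
print(round(relative_entropy(H_chant, 7) * 100, 1))   # vs. a seven-note alphabet: ~96.9%
```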
Knopoff and Hutchinson question Youngblood's non-chant results for statistical
reasons, claiming that Youngblood's sample size is too small for the differences he finds
in Mendelssohn's and Schubert's entropies to be significant. In support of this argument,
they construct confidence intervals for Youngblood's data, shown in Figure 3.1. When
Youngblood says the entropy of his Mendelssohn sample is 3.03, he makes the implicit
claim that this sample is representative of all of Mendelssohn: that if an analyst were to
compute a total entropy for all extant Mendelssohn works, that result would be fairly
close to Youngblood's. Confidence intervals measure how certain we are that the sample's
entropy is comparable to that of Mendelssohn's complete body of work. Figure 3.1 shows
Knopoff and Hutchinson's confidence intervals for Youngblood's entropy calculations.
In this example, Knopoff and Hutchinson state with 95% confidence that
Mendelssohn's entropy falls between 2.895 and 3.183, and that Schubert's entropy falls
between 3.016 and 3.244. Since the confidence intervals overlap, one cannot conclude
based on this data that Schubert's and Mendelssohn's total entropies differ; it is entirely
possible, based on this data, that Schubert's total entropy is in fact lower than

24 Although most listeners probably hear chant in terms of a seven-note alphabet, one can imagine factors that would lead the listener to hear chant in terms of a twelve-note alphabet, such as placement of the chant between or within tonal works (as in the fragment of chant that concludes Bruckner's Os Justi), or a listener's lack of familiarity with the repertoire.

Mendelssohn's, or that the two are equal.25

Figure 3.1: 95% confidence intervals for Youngblood's entropy calculations

For simple random samples, confidence intervals are generally computed using
some variant of the following formula:

    x̄ - 1.96(s/√n)  ≤  μ  ≤  x̄ + 1.96(s/√n)

25 Leon Knopoff and William Hutchinson, "Entropy as a Measure of Style: The Influence of Sample Length," Journal of Music Theory 27, no. 1 (1983): 75-97.


where x̄ is the mean of the sample (in this case, the sample's entropy), s is the
sample's standard deviation, n is the sample size, and μ is the quantity we wish to
establish: the predicted entropy for the musical style or composer in question.26 As is
clear from this formula, there are two factors that influence the size of a confidence
interval: sample size (Knopoff and Hutchinson's focus) and sample variance (the focus of
a 1990 Snyder article). The former is reasonably intuitive; a very small sample could be a
fluke, but if a large sample of Mendelssohn's work supports the conclusion that his total
entropy is 3.03, then it seems much more probable that Mendelssohn's overall entropy
really is close to 3.03. Snyder adds that variance within the sample can also make us
more or less confident. If an analyst looks at four Mendelssohn samples of comparable
length and finds them to have entropy values of 2, 4.98, 3.6, and 1.4, that analyst would
have difficulty predicting Mendelssohn's total entropy, since the samples are so disparate.
By contrast, if the four samples had come back as 3.01, 3.08, 2.97, and 2.93, the
conclusions drawn about these data would seem much more reasonable, even if the
sample were smaller.27
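The simple z-interval version of this reasoning can be sketched as follows. The standard deviation and sample size below are invented for illustration only; the intervals in Figure 3.1 were computed for binomial proportions with error propagation (per Knopoff and Hutchinson's appendix), not by this formula.

```python
from math import sqrt

def confidence_interval(sample_mean, sample_sd, n, z=1.96):
    """Simple z-interval: mean +/- z * s / sqrt(n); z = 1.96 gives ~95% confidence."""
    margin = z * sample_sd / sqrt(n)
    return (sample_mean - margin, sample_mean + margin)

# Hypothetical spread and sample size around Youngblood's Mendelssohn entropy of 3.03:
low, high = confidence_interval(3.03, 0.5, 46)
print(round(low, 3), round(high, 3))
```

A larger n or a smaller s narrows the interval, which corresponds to the two factors (sample size and sample variance) discussed above.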
In addition to statistical concerns, Snyder and Knopoff and Hutchinson highlight
26 The multiplier 1.96 specifies a 95% confidence interval; that is, if we take 100 samples, the means of 95 of the samples will fall within this interval. Multiplying by 1.645 instead would return a 90% confidence interval.
This formula is provided only to illustrate the concept; confidence intervals in this paper were calculated for binomial proportions, taking propagation of error into account. See Appendix A of Knopoff and Hutchinson, 1993, for more information.

27 John Snyder, "Entropy as a Measure of Musical Style: The Influence of A Priori Assumptions," Music Theory Spectrum 12, no. 1 (1990): 121-160.

several methodological problems, the clearest of which is modulation. In a piece that
modulates from C minor to Eb major, one would expect the pitches C, G, Eb, and Bb to
occur with the greatest frequency, which increases that piece's entropy quite sharply,
since such a piece contains four pitches that occur frequently, instead of the two such
pitches found in a nonmodulatory work. (This logic could be expanded to include scale
degree 7 of each key as well as scale degree 5, but the result would be the same.) In a
piece that modulates from C minor to G major, the shift to a new diatonic collection
would result in a higher entropy, as well. Youngblood's analyses make no accommodation
for this; although he computes entropies based on a scale-degree system, these scale
degrees are never adjusted for modulations. He notes that this lack of regard for
modulation may have disguised the differences between his Schumann and Schubert
samples, since (at least in these samples) Schubert's chromatic pitches tended to
arise from modulation, whereas chromatic pitches in his Schumann samples tended to be
more ornamental: very different phenomena that lead to similar results.28
In their analyses, Knopoff and Hutchinson compensate for modulations by
normalizing all passages to C major or A minor, although this normalization is only
initiated by changes in written key signature. Snyder finds this disregarding of implied
modulations quite problematic, as well as the implied prioritization of la-minor. Since a
modulation between relative keys is never accompanied by a change in key signature, a
piece that modulates from, say, F major to D minor would register as having higher
entropy insofar as the latter tonal area deviates from its la-tonic. Snyder also questions
the assumption that modulations should be normalized away, arguing that a piece that
begins and ends in distantly related keys ought to have a higher entropy than a piece that

28 Youngblood, 78.

begins and ends in the same key.29 Alternately, one could argue that a piece that
modulates from I to V ought to have a lower entropy than a piece that modulates from,
say, I to bII: that the predictability (and perhaps even the smoothness) of a modulation
ought to be a factor in that piece's entropy calculations.
Unfortunately, there are few solutions to the problem of accurate representation of
modulations in entropy. Arguably, Youngblood's system does associate distant keys with
higher entropies, since a modulation from C major to G major would contribute much
less to a piece's entropy than a modulation from C major to Ab major would, based on the
number of pitches held in common between the two respective diatonic collections. One
could imagine combining this method with a weighting system, in which pitches
belonging to passages in non-tonic keys contribute less to the piece's total entropy than
pitches in the tonic key do. Ideally, these weights would be determined in part by the
amount of time spent in the new key (as the listener's ability to remember the home key
diminishes over time), but any such system would almost certainly be criticized as
arbitrary.
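One possible (admittedly arbitrary) realization of such a weighting scheme is sketched below; the pitch line and the half-weight discount applied to the tonicized passage are both invented for illustration:

```python
from collections import Counter
from math import log2

def weighted_entropy(symbols, weights):
    """Entropy over weighted counts; weights below 1 discount, e.g., non-tonic-key passages."""
    totals = Counter()
    for s, w in zip(symbols, weights):
        totals[s] += w
    n = sum(totals.values())
    return -sum((t / n) * log2(t / n) for t in totals.values())

# Hypothetical line: tonic-key material, then material in a tonicized key at half weight.
pitches = [0, 4, 7, 0, 7, 2, 6, 9, 2]
weights = [1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5]

print(round(weighted_entropy(pitches, [1] * len(pitches)), 3))  # unweighted
print(round(weighted_entropy(pitches, weights), 3))             # non-tonic pitches discounted
```

Discounting the foreign-key pitches pulls the figure back toward the entropy of the tonic-key material alone, which is the intended effect of the proposal above.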
The debate over how best to represent modulation in entropy calculations for
tonal repertoire highlights the concern that underlies most if not all entropy-based
analyses: what alphabet best reflects listeners' perceptions of musical language?
Uninterpreted pitch or pitch class is rejected as an alphabet because it is a poor reflection
of listeners' interpretative hearings, since it shows no connection between the roles of C
and G in C minor and Eb and Bb in Eb major. Similarly, when Snyder adopts a twenty-eight-letter alphabet in which enharmonic spellings are taken as separate up to double

29 Snyder, 126-128. While the average listener may not realize that a piece has ended in C# major instead of C major, this same listener would probably notice if the piece begins in C major and ends in F# minor, if only for reasons of mode and register, a distinction that cannot be made within this system.

flats (of scale degrees 7, 3, 6, and 2) and double sharps (of scale degrees 1, 4, and 5), his
motivation is the notion that listeners hear F and E# as distinct pitches in certain contexts,
rather than the creation of an exhaustive system.
The variety of options available to analysts (even in terms of pitch alone) speaks
to the expressive potential of these alphabets, since they can be altered to best reflect
listeners' perceptions of any given repertoire. This same flexibility can limit the analyst's
ability to compare samples from sufficiently different styles, though. It would seem
unfair to compare Wagner's entropy within a twenty-eight-note system with late serial
Schoenberg's, for example, since Schoenberg's disuse of double sharps does not speak to
any increased predictability in his music as compared to Wagner's, nor would it be fair to
say the listener finds Schoenberg's style more constricted because these letters are
omitted. One of the unstated goals of such analysis, then, is the selection of an alphabet
that is sensitive to perceptual concerns for specific repertoires but also general enough in
its applicability that its use on music from other repertoires seems reasonable.
This challenge is even greater for contextual music. The most common and most
universally applicable alphabet, either pitch names or scale degrees accepting octave and
enharmonic equivalence, is all but useless for serial music or any sort of music that
exhausts the aggregate regularly. Any such piece will have maximal entropy for that
alphabet cardinality, regardless of whether the piece is based on a derived row or an all-interval row and, indeed, regardless of whether or not the piece is atonal at all. This
entropic equality implies that Webern's Variations for Piano is exactly as predictable as
Boulez's Piano Sonata no. 2, which would be in turn just as predictable as the first few
bars of Coltrane's Giant Steps: an unintuitive claim, to say the least.
One potential solution is the incorporation of higher-order entropies, often
accomplished by means of Markov chains. Such constructs would allow the

analyst to look for patterns in the ordering of pitches, rather than relying on their
frequency alone. With a simple pitch alphabet, Markov chains could not differentiate
between serial rows, but could at least distinguish between a serial piece and a non-serial
piece that happens to use each pitch equally. Higher-order constructs have even clearer
applicability in entropic analyses of tonal music, since they measure predictability of
succession something of particular importance if entropy is taken to be a measure of
tonality, since entropy on its own is order-blind. Thus, from the perspective of zero-order
entropy, the progressions in Figures 3.2 and 3.3 are exactly the same, although certainly
one is more predictable than the other within a tonal paradigm, and certainly one is more
tonal than the other. By contrast, Markov chains could differentiate between these two
strings easily.

Figure 3.2: Passage with pitch-class entropy 2.52

Figure 3.3: Passage with pitch-class entropy 2.52

In his 1958 analyses, Youngblood computes entropies on first-order combinations,
in addition to entropies based on zero-order data. That is, rather than accepting C, D, and
E as the most basic units of music, Youngblood accepts C followed by C, C followed by
C#, C followed by D, and so forth as individual letters, creating an alphabet with 144
letters. However, Hessert notes that the continued effectiveness of this strategy is limited;
an alphabet built from consecutive pitch pairs is almost reasonable at 144 letters, but
three consecutive pitches lead to 1728 possibilities, which makes for unwieldy
calculations.30 One can only imagine the complexity of a higher-order alphabet that does
not accept octave equivalence.
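The pair construction itself is mechanical: treat each ordered pair of consecutive symbols as one letter of the squared alphabet. The two pitch-class strings below are hypothetical, chosen only to show that zero-order entropy is order-blind while the first-order figure is not:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def first_order_entropy(symbols):
    """Entropy over consecutive pairs: an alphabet of up to 144 letters for 12 pitch classes."""
    return entropy(list(zip(symbols, symbols[1:])))

ordered   = [0, 4, 7, 0, 4, 7, 0, 4, 7, 0, 4, 7]  # a strictly cycling arpeggio
scrambled = [0, 7, 4, 4, 0, 7, 7, 0, 0, 4, 7, 4]  # same pitch-class content, reordered

print(round(entropy(ordered), 3), round(entropy(scrambled), 3))  # identical zero-order values
print(round(first_order_entropy(ordered), 3), round(first_order_entropy(scrambled), 3))
```

The cycling arpeggio and its scrambled twin have identical zero-order entropies, but the first-order figure separates them sharply.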
Hessert advocates the use of an alphabet based on intervals as a potential solution
to this problem, since a computation based on intervals is effectively first-order without
requiring any first-order computations. He also notes that an alphabet based on intervals
avoids the issue of modulation quite nicely, while reflecting motivic content more
accurately than pitch-based analysis can and potentially allowing for more meaningful
comparisons across disparate repertoire.31 Rhodes advocates a similar solution: an
alphabet that combines each pitch with its preceding interval.32 Potentially, such an
alphabet would allow the analyst to distinguish between typical and non-typical
resolutions of dissonant tones; a piece in which any scale degree can be left by any
interval is probably less tonal than a piece in which certain scale degrees (4 and 7, e.g.)
can usually only be left by certain intervals (down by step and up by step, respectively).
Of course, in the eyes of this computation, a composer who always resolves 7 to b5

30 Hessert, 16ff.

31 Ibid., 43-44.

32 James Rhodes, "Musical Data as Information: A General-Systems Perspective on Musical Analysis," Computing in Musicology 10 (1995-1996): 165-180.

would be no more or less predictable than a composer who always resolves 7 to 1, or
even a composer whose 7s can resolve anywhere but whose b3s always resolve to b6.
Through its reliance on pitch, this sort of analysis nullifies many of the benefits Hessert
ascribes to intervallic analysis.
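Hessert's interval alphabet amounts to differencing the pitch line before counting, which makes the computation order-sensitive without any growth in alphabet size. A minimal sketch, with two invented lines in MIDI note numbers:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def interval_alphabet(pitches):
    """Recast a pitch line as its successive melodic intervals, in semitones."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

stepwise = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62]  # mostly whole steps
disjunct = [60, 67, 58, 71, 63, 74, 61, 70, 59, 66]  # wide, varied leaps

print(round(entropy(interval_alphabet(stepwise)), 3))
print(round(entropy(interval_alphabet(disjunct)), 3))
```

Transposing either line leaves its interval entropy untouched, which is the modulation-proofing Hessert describes.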
Lewin conceptualizes the importance of higher-order analytical capacity in terms
of "charge," defined as the listener's degree of uncertainty as to what an upcoming
interval will be based on the intervals directly preceding it. His analysis goes up to sixth-order strings (that is, fifth-order computations based on intervals), but it occurs within a
highly idealized environment: a twelve-tone row independent of musical context, and
therefore independent of irregularities (e.g., partial presentations or reorderings of a row)
or complications (e.g., the division of a row into verticalities, leading to the creation of
melodic intervals not present in the original row) that would make such higher-order
analysis impractical.33
Based on this ideal environment, Lewin determines that if a listener is able to
remember the previous five intervals of Schoenberg's String Quartet, no. 4, the listener
can predict the sixth interval with complete certainty (assuming the row form in question
has not been altered or truncated). This certainty is not an accurate reflection of listeners'
perceptions of this row, though, even under ideal circumstances; if it were, Lewin argues,
the associated musical experience would be quite dull. Therefore, he concludes, the
listener probably only hears back two or three intervals (perhaps more or fewer,
depending on motivic structure, complexity of the line's presentation, repetitiveness of
the line, and other factors), but probably not six. Thus, even if such higher-order analyses
were practical, they may not be a reasonable reflection of the listener's experience.
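Lewin's "charge" can be approximated numerically as a conditional entropy: the uncertainty about the next symbol given the k preceding ones. The sketch below uses an invented, strictly periodic interval cycle as a stand-in for an idealized repeating row; with even one interval of memory, the next interval is already (nearly) determined:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def conditional_entropy(symbols, k):
    """H(next | k previous), estimated as H((k+1)-grams) - H(k-grams).
    Edge effects at the ends of the string make the estimate approximate."""
    def gram_entropy(m):
        return entropy(list(zip(*(symbols[i:] for i in range(m)))))
    return gram_entropy(k + 1) - (gram_entropy(k) if k else 0.0)

# Invented period-4 interval cycle standing in for a repeating row form:
row_intervals = [3, 1, 2, 5] * 8

print(round(conditional_entropy(row_intervals, 0), 3))  # no memory: 2 bits of uncertainty
print(round(conditional_entropy(row_intervals, 1), 3))  # one interval of memory: ~0
```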

33 David Lewin, "Some Applications of Communication Theory to the Study of Twelve-Tone Music," Journal of Music Theory 12, no. 1 (1968): 50-84.

Of course, one can imagine situations in which a less literal sixth-order analysis
would be appropriate. Although it seems unreasonable to expect a listener to remember
and consider six successive intervals, it seems quite reasonable for a listener to remember
a six-element contour, or six intervals expressed in the form of two or three verticalities.
To date, this form of entropy "chunking" has received almost no mention in the relevant
literature.
Hessert also raises the question of duration, finding it problematic that a half-note
chord root C is treated as equal to a sixteenth-note neighbor tone D in most entropic
analyses.34 Hiller and Bean reflect this same concern in their 1966 analyses of sonata
expositions, in which longer notes are weighted more heavily than shorter notes, but
Hessert criticizes this approach for its lack of attention to attack, arguing that sixteen
sixteenth-note Cs are quite different perceptually from a single whole-note C.35 One
imagines that any method of computation that addresses both concerns would be
prohibitively complex; it seems most likely that any interested analyst must choose
whichever approach seems least inappropriate for that analyst's particular repertoire.
In any case, the most salient of Hessert's concerns (that an ornamental tone is treated as equal to a chord tone) seems to be more an issue of interval than of duration,
since ornamental tones are approached by step more often than not (and since an
ornamental tone approached by a large leap is probably aurally surprising enough that it
ought well to contribute as much to the piece's entropy as the chord tone it ornaments). It
seems perceptually reasonable to claim that a whole step is a whole step, regardless of
whether that whole step connects a C and a passing D or a chord tone C and an adjacent

34 Hessert, 68.

35 Lejaren Hiller and Calvert Bean, "Information Theory Analyses of Four Sonata Expositions," Journal of Music Theory 10, no. 1 (1966): 96-137.

chord tone D. From a pitch-based perspective, the distance a line must travel to arrive at the next pitch may be more relevant to musical predictability than how long the line stays on that pitch; but even in the absence of pitch, it seems the
primary determinant of predictability is not the duration of each individual pitch, but
instead either the rhythmic pattern in which these pitches present themselves or the
presence or absence of attacks at certain metric positions.
Hessert cites one example of entropy calculations based on rhythmic patterns, an
unpublished 1959 Master's thesis by John Brawley (Indiana University). Hessert finds
this analysis problematic, since it relies upon an implicit invocation of an alphabet of
infinite cardinality, which makes the computation of relative entropy and redundancy
impossible. Additionally, Brawley sets forth no predetermined limits to what constitutes a
pattern. Does a dotted quarter followed by an eighth note constitute a rhythmic pattern? If this configuration begins on a weak beat, or is preceded by an eighth note, is it the same pattern? Is a pattern perceptually the same at M.M.=160 as it is at M.M.=40?36
Snyder advocates the exploration of duration-sensitive entropy calculations, but
he notes that such calculations almost necessarily conflate clock time with perceptual
time.37 In other words, by creating calculations based on the notated tempo we implicitly
privilege the former, which is less defensible given the degree to which analysis based on
entropy is meant to be a measure of listeners' perceptions of predictability. Of course, any
analysis that claims to be a reflection of perceptual time must almost certainly encompass
multiple musical domains beyond rhythm, tempo, and duration. In privileging longer
notes over shorter ones, we run the risk of (for example) privileging extended neighbor
notes over the shorter chord tones they ornament.
36 Ibid., 45-50.

37 Snyder, 125-126.

Other than Rhodes, few analysts have attempted to deal with more than one
musical alphabet simultaneously. The notable exception is Hiller and Fuller's 1967
analysis of the op. 21 Symphony, in which pitch (not pitch class) is combined with the
number of eighth notes between successive attacks. Entropies are also computed on
various types of intervals. These entropy calculations are then used to draw conclusions about each formal section of the first movement. When pitch is considered
alone, results between zero-order entropy and first- or higher-order chains are
inconclusive; although the development is (as one would expect) the least predictable in
terms of individual pitches, its higher-order results are more predictable than either the
exposition or the recapitulation.38 These inconsistencies carry over into interval-based
and attack-point-based entropies.39
As mentioned, entropy is the quantity of information (measured in the number of
bits the message would require to store or transmit) that each letter of an alphabet
conveys. Hiller and Fuller also express their entropy in terms of bits per second (based on
the notated tempo) that is, examining entropy in terms of the rate at which information
is presented. Their hope is to distinguish between the listener's experience of a great deal
of information presented quickly, and the same amount of information presented over a
longer timespan. Interpreting entropy in terms of bits per second does not change the
entropy results for op. 21, but the idea bears investigation: that the speed with which
information is presented influences the audience's perception of its complexity.
Unfortunately, this measure cannot describe how evenly information is distributed across a passage, distinguishing, for example, a burst of information followed by silence from a passage with a continuous information rate.

38 Lejaren Hiller and Ramon Fuller, "Structure and Information in Webern's Symphonie, op. 21," Journal of Music Theory 11, no. 1 (Spring 1967): 78.

39 Ibid., 84ff.

Of course, the accuracy of Webern's
notated tempos is problematic in any case, and the frequent ritardandos in his music make
it less plausible that a calculation of this type could be relevant to a performance. Despite any practical limitations, though, the fact that entropy was considered in terms of the rate at which information is received hints at an early connection between entropy and diachronic analysis, and, arguably, an early connection between entropy and time as well.
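The bits-per-second idea can be made concrete. The sketch below is my own reconstruction; Hiller and Fuller's exact procedure is not specified beyond scaling by the notated tempo, and the function names and sample data here are invented. It multiplies a sample's zero-order entropy in bits per letter by the number of letters sounding per second at the notated tempo.

```python
from collections import Counter
from math import log2

def entropy_bits_per_letter(symbols):
    """Zero-order entropy of a sample, in bits per letter."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum((c / n) * log2(n / c) for c in counts.values())

def entropy_bits_per_second(symbols, durations_qn, tempo_qn_per_min):
    """Scale bits per letter by letters per second, using the notated tempo.
    `durations_qn` gives each note's length in quarter notes."""
    seconds = sum(durations_qn) * 60.0 / tempo_qn_per_min
    return entropy_bits_per_letter(symbols) * (len(symbols) / seconds)

pitch_classes = [0, 4, 7, 4, 0, 7, 4, 0]      # invented sample
durations = [1, 1, 0.5, 0.5, 1, 1, 0.5, 0.5]  # quarter notes
print(entropy_bits_per_second(pitch_classes, durations, tempo_qn_per_min=120))
# Doubling the tempo doubles the rate without changing bits per letter:
print(entropy_bits_per_second(pitch_classes, durations, tempo_qn_per_min=240))
```

As the second call shows, this measure responds to tempo alone, which is precisely why it cannot distinguish an even information rate from bursts and silences within the same passage.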
Hessert gives four criteria for effective entropy-based analyses:
1. An alphabet should be finite;
2. Elements in an alphabet should be discrete;
3. Sample sizes should be as large as possible;
4. Analysis should be based on as many musical domains as possible.

The first two are basic criteria without which entropy calculations are impossible; the
second two are desiderata but not necessarily requirements. To these one can add that
entropy can most effectively analyze samples with low variances, since entropy is in
some sense a decontextualized measure of central tendency. Smaller sample sizes may
assist in analysis, if they serve to reduce variance; it is more effective to analyze a small
sample that possesses a given characteristic uniformly than to combine this sample with
another sample lacking this characteristic. Imagine a bimodal grade distribution in which
many students have a 90% average and many have a 60% average. Considering these
students in terms of two smaller sample sizes allows one to generalize about the data
easily, but combining the two samples yields both an unhelpful overall average and a
much higher degree of uncertainty. The same logic ought to apply to musical domains.
Considering data across multiple domains is useful, but considering multiple domains simultaneously (that is, combining entropies of different domains into a single entropy measure) may disguise tendencies in the data.
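The grade-distribution analogy can be checked numerically. In the sketch below (an illustration with invented labels, not data from any piece), two internally uniform samples each have zero entropy, but pooling them manufactures uncertainty that belongs to neither sample alone.

```python
from collections import Counter
from math import log2

def entropy(sample):
    """Zero-order entropy in bits; log2(n / c) avoids a negative zero
    for single-letter samples."""
    counts = Counter(sample)
    n = len(sample)
    return sum((c / n) * log2(n / c) for c in counts.values())

# Two hypothetical samples, each saturated with a single letter:
section_a = ['ic1'] * 50
section_b = ['ic5'] * 50

print(entropy(section_a))              # 0.0: perfectly predictable alone
print(entropy(section_b))              # 0.0
print(entropy(section_a + section_b))  # 1.0: pooling creates uncertainty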
The cautions one can draw from the history of entropy in music are, for the most
part, no different from the cautions that apply to all analysis. In particular, entropy-based
analyses are problematic when they do not reflect musical experience. If one accepts that all music analysis is necessarily metaphor, and that quantitative analyses are simply a different way of exploring metaphor, then the most important caution is that these metaphors must be apt, rather than relying upon their quantitative nature to make their arguments. If an analyst is careful to ensure that conclusions based on entropy are reflective of musical experience and perception, diachronic or synchronic, then entropy
can prove a useful tool for analysis.

CHAPTER IV
ALPHABETS FOR ENTROPY-BASED ANALYSIS

Interval Entropy

As discussed previously, pitch class entropy is rarely useful for analysis of post-tonal music. The table below gives pitch class entropy figures for a collection of post-tonal vocal works; Youngblood's results for Schubert's pitch entropy provide a baseline from tonal repertoire.

Work                              Style                                  Pitch class entropy   Relative entropy
Webern op. 15 (Fünf geistliche    Freely atonal                          3.58                  100%
  Lieder), without no. 5 [40]
Webern op. 16 (Fünf Canons)       Freely atonal canons                   3.57                  99.7%
  and op. 15, no. 5
Webern op. 25 (Drei Lieder)       Serial, based on a derived row         3.58                  100%
Babbitt, "Widow's Lament in       Serial, based on an all-interval row   3.58                  100%
  Springtime"
Youngblood's Schubert sample      Tonal                                  3.13                  87.4%

Table 4.1: Pitch entropies in Webern works, compared with Babbitt and Schubert

No measures of statistical significance are necessary to interpret these results.

40 Since op. 15, no. 5 is a canon, it is included with the op. 16 canons throughout this section.

Although pitch entropy is able to distinguish Schubert from Webern, it is unable to
distinguish between serial and freely atonal works, or derived rows and all-interval rows.
Even canons are seen as maximally unpredictable, although one imagines the second and
third voices are quite predictable indeed.
Intuitively, it seems entropy based on interval class should be able to distinguish
between these styles. Pitch class entropy can only recognize canons iterated at the same
pitch level, but a canon interpreted as a series of intervals should be recognizable at any
pitch level. Although the order-blindness of entropy somewhat limits its effectiveness for
canons, interval class entropy can at least distinguish a canon from a non-canonic piece in
the same style. The same logic applies to serial works; a serial work will generally have
lower entropy than a freely atonal work since any interval appearing in the row would be
repeated many times, while any interval not appearing in the row would be heard very
infrequently. (Similarly, a serial work based on an all-interval row should have roughly
the same number of all interval classes, whereas a work based on a derived row would
have roughly proportionate numbers of a few interval classes and very few of any others.)
Such a measure could be unable to distinguish between a serial work based on a derived
row and a freely atonal work saturated with the pitch class set that forms the basis of the
former's derived row, or a serial work based on an all-interval row and a freely atonal
work that simply exhausts the aggregate of interval classes regularly, but arguably, most
listeners would not be able to make this distinction, either.
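The two interval alphabets at issue in this chapter can be extracted mechanically from a melodic line. The sketch below is my own, with MIDI note numbers standing in for pitches: it computes interval classes, which assume inversional and octave equivalence, and registrally-ordered interval classes, which drop inversional equivalence.

```python
def interval_classes(pitches):
    """Interval classes 0-6: mod 12 with inversional equivalence."""
    return [min(d % 12, -d % 12)
            for d in (b - a for a, b in zip(pitches, pitches[1:]))]

def registral_interval_classes(pitches):
    """Registrally-ordered interval classes 0-11: mod 12 without inversional
    equivalence, so an ascending semitone (1) differs from a descending one (11)."""
    return [(b - a) % 12 for a, b in zip(pitches, pitches[1:])]

melody = [60, 61, 72, 71, 59]  # invented line: C4, C#4, C5, B4, B3
print(interval_classes(melody))            # [1, 1, 1, 0]
print(registral_interval_classes(melody))  # [1, 11, 11, 0]
```

Feeding either list to an entropy routine yields the two intervallic entropies compared below; the choice between them encodes an assumption about inversional equivalence.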
These intuitions are somewhat flawed, in that they assume an idealized linear
presentation of a serial row. Vertical presentation of a portion of a row or the division of a
row amongst several voices will almost certainly create new intervals not represented in
the original row. Nevertheless, entropy is at heart a measure of predictability, and it seems
reasonable that it should reflect the listener's surprise at hearing an interval not linearly

38
present in the row, even if reflection of this surprise comes at the expense of the
construct's ability to identify a work as serial or non-serial.
Horizontal interval class analysis of the same works from Table 4.1 provides the
results shown in Table 4.2 and Figure 4.1.

Work                        Interval class entropy   Deviation (95% confidence)   Relative entropy
Webern op. 15               2.57                     .04                          91.5%
Webern op. 16               2.48                     .04                          88.3%
Webern op. 25               2.35                     .05                          83.6%
Babbitt, "Widow's Lament"   2.72                     .06                          96.8%

Table 4.2: Interval class entropies comparing serial and non-serial works

Figure 4.1: Interval class entropies comparing serial and non-serial works

These data indicate that interval class entropy is able to distinguish between
derived and all-interval rows, and between canons and non-canons from approximately
the same period. These are both important tests of the construct's effectiveness; its ability
to make these distinctions speaks toward its ability to reflect musical saturation and
predictability.
These distinctions are not retained when vertical intervals are included.

Work      Entropy (vertical and horizontal intervals)   Deviation   Relative entropy
op. 16    3.42                                          .05         95.5%
op. 25    3.34                                          .06         93.3%

Table 4.3: Vertical and horizontal interval entropy on one serial and one non-serial work

Although op. 25's entropy is still lower than op. 16's, the difference is no longer
significant. In other words, based on these calculations we cannot posit a distinction
between Webern's use of verticalities in op. 16 and op. 25; if both pieces were played as
block chords, it is unlikely the listener would be able to distinguish between them based
solely on intervallic content.
Returning to the question of horizontal intervals, then, we find that removing
inversional equivalence eliminates many of the distinctions between these works, as
shown in Table 4.4 and Figure 4.2. Without inversional equivalence, relative entropies are
higher across the board, since what was originally an emphasis on interval class 1
becomes a dual emphasis on registrally-ordered interval classes 1 and 11. Variances
increase for the same reason, which makes statistically significant distinctions less likely.

Nevertheless, registrally-ordered interval class entropy can still distinguish
meaningfully between canons and non-canons (Webern op. 15 vs. op. 16) and between
derived rows and all-interval rows (Webern op. 25 vs. Babbitt). The most interesting
difference between Table 4.2 and Table 4.4 is op. 25, which has a lower interval class
entropy than op. 16 but a higher registrally-ordered interval class (ric) entropy. This
distinction speaks to a fundamentally different approach to inversion between these two
works. In op. 16, an ric1 is not the same as an ric11, since a melodic ric1 in the clarinet
line could not be answered with an ric11 in the vocal line without breaking the canon.
Assumptions of inversional equivalence seem much more reasonable in op. 25, since the
juxtaposition of prime rows with inversional rows leads the listener to hear intervals and
their inversions as at least related, if not equivalent.

Work                     Registrally-ordered interval class entropy   Deviation   Relative entropy
Webern op. 15            3.40                                         .06         95.0%
Webern op. 16            3.24                                         .06         90.5%
Webern op. 25            3.34                                         .06         93.3%
Babbitt, "Widow's        3.50                                         .07         97.8%
  Lament in Springtime"

Table 4.4: Registrally-ordered interval class entropy in Webern and Babbitt


Figure 4.2: Registrally-ordered interval class entropy in Webern and Babbitt

The remaining oddity in these data is the similarity between Webern op. 15 and
Babbitt. To investigate this similarity, we expand intervallic entropy into interval entropy
(-72 < x < 72) and ordered directional interval class entropy (-12 < x < 12).41
Ordered directional interval class entropy bears few surprises. The data in Table 4.5 and
Figure 4.3 show the expected distinction between Webern and Babbitt, but from these
data no conclusions can be drawn about any of the Webern works examined, almost the opposite of the results generated by registrally-ordered interval class entropy.

41 72 (or six octaves) is a number chosen out of convenience: the distance between the highest and lowest pitch in any of these pieces, rounded up to the nearest octave.


Work                Ordered directional interval class entropy   Deviation   Relative entropy
Webern op. 15       3.71                                         .08         82.1%
Webern op. 16       3.64                                         .09         80.1%
Webern op. 25       3.52                                         .11         77.9%
Babbitt, "Widow's   4.03                                         .12         89.2%
  Lament"

Table 4.5: Ordered directional interval class entropy in serial and non-serial works

Figure 4.3: Ordered directional interval class entropy in serial and non-serial works


The inferences to be made from this apparent inconsistency (either that ordered interval class is a less relevant structure in these Webern works, or that Webern's predictability in terms of ordered interval class remains consistent across a variety of post-tonal styles) are at first alarming. Either conclusion makes suspect Hessert's claim that interval-based entropy is capable of dealing meaningfully with works from disparate periods and styles, given that registrally-ordered interval class entropy lacks the generality to distinguish between Babbitt and freely-atonal Webern, while ordered directional interval class entropy lacks the generality to distinguish between Webern works of different styles and time periods. Perhaps the more useful claim to draw from this perceived lack of generality is that any invocation of intervallic entropy must be nuanced: in computing intervallic entropy we make implicit assumptions about a given composer's approach to the interval, assumptions that should be examined and argued.
One must also keep in mind that although statistically significant differences
between works imply differences in style, the lack of statistically significant differences
does not imply stylistic similarities. The lack of distinction between Webern's op. 15 and
Babbitt's Widow's Lament in terms of registrally-ordered interval class entropy does
not imply a fundamental similarity between these works' use of registrally-ordered
interval classes; rather, the differences between the works are simply not profound
enough for us to be certain that they imply a genuine stylistic difference. In short, an
unexpected significant difference between two works is noteworthy, but an unexpected
similarity need not be.
At the very least, these results demonstrate the utility of examining repertoire
from multiple perspectives on the interval. These results also hint at the possibility of
using various types of intervallic entropy as evidence in an argument against, for

example, accepting inversional equivalence as a given in analysis of a particular work.
Entropy computations for pure intervals, as opposed to interval classes, provide
the following results:

Work                Interval entropy   Deviation
Webern op. 15       4.92               .11
Webern op. 16       4.75               .11
Webern op. 25       4.98               .16
Babbitt, "Widow's   4.91               .16
  Lament"

Table 4.6: Interval entropy in Webern and Babbitt

Figure 4.4: Interval entropy in Webern and Babbitt

Relative entropy is omitted here, because the maximal entropy of a 144-letter alphabet is extraordinarily large. As a result, these works would have extraordinarily small relative entropies, which would give an impression of predictability not audible in the music.
Although op. 16 seems to have a much smaller entropy than all other works
considered, this deviation is not statistically significant. Even if it were, the conclusions
drawn would be slightly problematic. One could not conclude even from a significantly smaller interval entropy that Webern op. 16 relies more upon smaller intervals, merely that the same intervals are repeated more often. This was not a useful distinction in the study of interval class entropy, since within a mod-12 space we can assume, at least to some degree, that a few repeated intervals will become predictable regardless of absolute size, an assumption which rests upon the audience's ability to distinguish between intervals
easily, even if not consciously. By contrast, it seems problematic to credit most audiences
with the ability to distinguish between a minor 13th and a major 14th (which occur four
measures apart in op. 25, no. 2) without a tonal context. If intervals are not
distinguishable they cannot become predictable.
Still, it is reasonable that op. 16 should have the most precipitous drop in entropy.
Octave equivalence within each instrumental line is a tenuous assumption in any of these
works, but it is the most questionable in op. 16. A minor sixth in the dux answered by a
major thirteenth in the comes would constitute a break in the canon; they are
unquestionably different intervals for the purposes of this piece. The correlation between
a sharper drop in entropy and a less readily-assumed octave equivalence is only
reasonable if one assumes that a work in which octave equivalence holds will, more often
than not, consist of relatively many of one size of an interval and relatively few of its mod-12 equivalents (a work in which registrally-ordered interval class 2 is usually a whole step and occasionally a major ninth, for example). This results in a narrower
distribution of intervals when one moves from registrally-ordered interval classes to
intervals, since most ric2s will stay whole steps, and only a few of them will become
major ninths. If an analyst is willing to make this assumption, then interval entropy can
provide support for an argument against octave equivalence.
Unfortunately, the variety of alphabet sizes (and more importantly, the variety of letter frequency distributions within alphabets) makes comparisons between different alphabets untrustworthy: claiming, for example, that op. 16 has a lower relative interval entropy than interval class entropy, when the rarity of very large intervals and the frequency of very small intervals ensure that any work will have a larger interval entropy.
Even relative entropy cannot make this distinction reliably, since both the size of the
alphabet and the letters' distributions within that alphabet are at issue. Using another
piece as a basis for comparison may improve the trustworthiness of these statements, as
in this chapter's claim that op. 16's entropy decreases more sharply than op. 15's.
In short, for these test cases intervallic entropy accords with analytic expectations more often than not. The cases that seem dissonant against these expectations are non-significant results, which need not prove either similarity or difference; only significant results necessarily prove difference.

CSEG Entropy

One of the clearest points of disconnection between information theory entropy as discussed thus far and our intuitions of musical predictability is entropy's inability to
reflect the predictability inherent in motivic material.42 Although intervallic entropy can
recognize the preponderance of a certain interval associated with the repetition of a
motive, it is unable to connect these intervals into larger chains without incurring the
computational difficulties of higher-order analyses.
To some degree this shortcoming can be addressed by the computation of entropy
on contour segments and contour segment reductions. Such an analysis is able to consider
spans of music as ordered, although these ordered spans are unordered within the larger
composition, the same way pitches are unordered in computation of pitch-based entropy.
The construct of CSEG entropy can be said to measure the degree to which a piece is saturated with a few contours: the likelihood that a listener will be able to guess the direction of the next pitch, if not its distance.
To determine the CSEG class of a melodic segment, we rank pitches based on
their relative height; larger numbers represent higher pitches. Consider the first four
pitches of Example 4.1, shown on page 49. The lowest pitch, B, is represented by the
number 0; the next lowest pitch, C#, is represented by 1. This numbering process
continues until all pitches have been ranked. Thus, the first four pitches of Example 4.1 have the CSEG (3120): the highest pitch, then the second-lowest, then the second-highest, then the lowest.
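The ranking procedure is easy to state as code. The sketch below is my own; the sample pitches are hypothetical MIDI note numbers chosen to produce the contour (3120), since the actual pitches of Example 4.1 are not reproduced here.

```python
def cseg(pitches):
    """Contour segment: rank each pitch by height, 0 = lowest.
    Repeated pitches share a rank, so contours such as (010) are possible."""
    ranks = {p: rank for rank, p in enumerate(sorted(set(pitches)))}
    return tuple(ranks[p] for p in pitches)

print(cseg([71, 61, 66, 59]))  # (3, 1, 2, 0): highest, second-lowest, second-highest, lowest
print(cseg([60, 64, 60]))      # (0, 1, 0): a repeated pitch shares rank 0
```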
CSEG entropy analysis based on pre-identified melodic segmentations seems problematic, given its potential to introduce the analyst's biases. It is likely that an analyst would focus on the CSEGs he or she finds prominent, ignoring the contour segments that do not contribute to larger patterns, in other words decreasing the entropy

42 For the purposes of the test cases in this chapter, I use 'motivic' to refer to a work in which many pitches can be explained or predicted based on motives to which they belong, and 'non-motivic' to refer to a work in which few pitches can. This is an oversimplification, but the test cases in this chapter generally fall very clearly into one classification or the other.

of a sample by concentrating on the events that are the most predictable. To avoid this
bias, this analysis includes every possible CSEG of a given length by examining a
moving window of that length. That is, for a CSEG of length 3, the contour segment
created from the first through third pitches of a composition will first be considered, then
the contour segment created from the second through fourth pitches, and so on. This
process is repeated for CSEGs of lengths up to six pitches.
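The moving-window procedure can be sketched as follows. This is my own illustration: the motive is the C4-C5-A4-F4 figure from the motivic test case, while the comparison line is an invented, more varied melody rather than the actual random sample.

```python
from collections import Counter
from math import log2

def cseg(pitches):
    """Rank pitches by height to obtain a contour segment."""
    ranks = {p: rank for rank, p in enumerate(sorted(set(pitches)))}
    return tuple(ranks[p] for p in pitches)

def cseg_entropy(pitches, length):
    """Entropy over every CSEG of the given length, taken from a window
    that advances one pitch at a time."""
    windows = [cseg(pitches[i:i + length])
               for i in range(len(pitches) - length + 1)]
    counts = Counter(windows)
    n = len(windows)
    return sum((c / n) * log2(n / c) for c in counts.values())

motivic = [60, 72, 69, 65] * 30                 # C4-C5-A4-F4, repeated continuously
varied = [60, 72, 69, 65, 61, 74, 58, 67] * 15  # invented, less repetitive line

print(cseg_entropy(motivic, 3))  # about 1.5: only three contours recur
print(cseg_entropy(varied, 3))   # about 2.25: more contours, more evenly spread
```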
Figures 4.5 and 4.6 show CSEG data for two test cases: one built entirely out of
random pitches (chosen between A0 and C8)43 and one in which the motive C4-C5-A4F4 is repeated continuously. These data show exactly what one would expect: a stream of
pitches saturated with motives has a much lower entropy than a stream of random
pitches. (The entropies of the motivic sample do not increase as cardinality increases
because so few CSEGs are represented in this sample.)

Figure 4.5: A randomly-generated string of pitches

43 These random pitches are based on data from random.org, which creates random numbers based on atmospheric noise.


Figure 4.6: A motivic string of pitches

Relative entropy is not useful as a description of these samples, since it is likely that all possible cardinality-three CSEGs will appear in a sufficiently long sample, but
extremely unlikely that all possible cardinality-six CSEGs will appear. Unfortunately, this
makes comparisons between entropies of different cardinalities impractical.

           3-CSEG entropy   Deviation   4-CSEG entropy   Deviation
Random     2.64             .05         4.68             .14
Motivic    1.50             .02         2.00             .02

           5-CSEG entropy   Deviation   6-CSEG entropy   Deviation
Random     6.67             .24         8.13             .30
Motivic    2.00             .02         2.00             .02

Table 4.7: CSEG entropies for random and motivic strings


Figure 4.7: CSEG entropies for random and motivic strings

When comparisons are drawn between this random collection of pitches and the
first movement of Webern's op. 5, a highly motivic work, problems become apparent.
According to the data in Table 4.8 and Figure 4.8, from the perspective of contour the
first movement of Webern's op. 5 is less predictable than a collection of random pitches.
Examining the final tallies of cardinality-three CSEGs for each piece clarifies the
situation:


                           3-CSEG entropy   Deviation   4-CSEG entropy   Deviation
Random                     2.64             .05         4.68             .14
Webern op. 5, first mvt.   3.40             .07         5.53             .14

                           5-CSEG entropy   Deviation   6-CSEG entropy   Deviation
Random                     6.67             .24         8.13             .30
Webern op. 5, first mvt.   7.41             .21         8.65             .25

Table 4.8: CSEG entropies, random string versus Webern, op. 5, no. 1

Figure 4.8: CSEG entropies, random string versus Webern op. 5, no. 1


CSEG     Instances in the first movement   Instances in a randomly-
         of Webern, op. 5                  generated string of pitches
(000)     40
(001)     23
(010)     39
(100)     24
(011)     23
(101)     36
(110)     24
(012)     71                               76
(021)    133                               89
(102)    129                               87
(120)     88                               78
(201)     94                               78
(210)    136                               86

Table 4.9: CSEGs, random string versus Webern, op. 5, no. 1


Figure 4.9: CSEGs, random string versus Webern, op. 5, no. 1

Webern's op. 5 (and many other musical works, for that matter) is free to move through all thirteen possible cardinality-three CSEGs. By contrast, a composition based on randomly-selected pitches is effectively limited to six CSEGs, since the probability of CSEGs like (000) and (010) occurring (that is, of the string of random pitches containing repetitions) is so small.
Perhaps there is some truth, then, to the claim that a string of random pitches is
more predictable in terms of CSEGs than the first movement of Webern's op. 5. Given
a melody based on a string of random pitches, we would be quite surprised to hear a
repeated pitch or, for that matter, two adjacent pitches in the same octave. A pitch above
middle C is more likely to be followed by a lower pitch than a higher one, and vice versa.

In these respects, at least, the random composition is quite predictable.
While developing a more useful control (a string of pitches that contains each possible CSEG the same number of times) is simple enough, these same problems arise on a less dramatic scale in comparisons between non-random pieces. Table 4.10 and Figure 4.10 compare the first movement of op. 5 and op. 18.
Any resemblance at all between these results is contrary to intuition. One would expect op. 5 (a thoroughly motivic work) to have a much lower entropy than op. 18 (one of Webern's earliest serial works, characterized by Anne Shreffler as "irrational and disorganized"44), but at the 3-CSEG level, op. 5 has a significantly higher entropy than op. 18, and at no other cardinality is the distinction between the two significant.

                        3-CSEG entropy   Deviation   4-CSEG entropy   Deviation
op. 5, first movement   3.40             .07         5.53             .14
op. 18                  3.26             .07         5.32             .14

                        5-CSEG entropy   Deviation   6-CSEG entropy   Deviation
op. 5, first movement   7.41             .21         8.65             .25
op. 18                  7.33             .26         8.80             .21

Table 4.10: CSEG entropies, op. 5, no. 1, versus op. 18

44 Anne Shreffler, "'Mein Weg geht jetzt vorüber': The Vocal Origins of Webern's Twelve-Tone Composition," Journal of the American Musicological Society 47, no. 2 (Summer 1994): 277.


Figure 4.10: CSEG entropies, op. 5, no. 1, versus op. 18

We see similar problems when comparing two non-serial works, as shown in Table 4.11 and Figure 4.11. Although op. 15 is much less consistently motivic than the first movement of op. 5, op. 15 consistently has a lower entropy than op. 5, and for some CSEG cardinalities the difference is significant.

                               3-CSEG entropy   Deviation   4-CSEG entropy   Deviation
op. 5, first movement          3.40             .07         5.53             .14
op. 15 (excluding the canon)   3.20             .06         5.22             .12

                               5-CSEG entropy   Deviation   6-CSEG entropy   Deviation
op. 5, first movement          7.41             .21         8.65             .25
op. 15 (excluding the canon)   7.19             .18         8.69             .23

Table 4.11: CSEG entropies, op. 5, no. 1, versus op. 15

Figure 4.11: CSEG entropies, op. 5, no. 1, versus op. 15

Perhaps the analogy of pitch entropy can clarify the situation. It is intrinsic to the
entropy formula that the number of pitch classes present affects the final computations
more than the distribution of those pitch classes, although both contribute. In other words,
from the perspective of the formula, it is more important that a wholly diatonic work use
only seven pitch classes than that it use (e.g.) scale degrees 1 and 5 more frequently than
scale degrees 2 and 6. The addition of chromatic pitches will increase the piece's entropy
more dramatically than a more even distribution of the seven diatonic pitches.
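This asymmetry can be verified numerically. In the sketch below (with invented pitch-class tallies, not counts drawn from any piece), evening out an already mostly-even diatonic distribution gains little entropy, while trading a tenth of the notes for chromatic pitch classes gains far more.

```python
from collections import Counter
from math import log2

def entropy(sample):
    counts = Counter(sample)
    n = len(sample)
    return sum((c / n) * log2(n / c) for c in counts.values())

# Hypothetical 100-note samples over C major pitch classes:
uneven = [0]*20 + [7]*20 + [2]*12 + [4]*12 + [5]*12 + [9]*12 + [11]*12
evened = [0, 2, 4, 5, 7, 9, 11] * 14 + [0, 7]   # nearly uniform diatonic
chromatic = uneven[:90] + [1, 3, 6, 8, 10] * 2  # ten notes become chromatic

gain_from_evening = entropy(evened) - entropy(uneven)
gain_from_chromatics = entropy(chromatic) - entropy(uneven)
print(round(gain_from_evening, 2))     # 0.04
print(round(gain_from_chromatics, 2))  # 0.31
```

The exact figures depend on the assumed tallies, but the direction of the asymmetry holds whenever the diatonic distribution is not severely skewed: enlarging the alphabet outweighs flattening the distribution.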
In this case, the eight (cardinality-three) CSEGs that do not involve pitch
repetition are analogous to the diatonic pitches, and the five CSEGs that do involve pitch
repetition are analogous to the chromatic pitches. For the purposes of illustration, we
return to the data from Tables 4.8-4.9 comparing op. 5 with a string of random pitches.
Example 4.3 shows a melody composed using the CSEG frequencies of the first movement of Webern, op. 5, versus a melody composed using the CSEG frequencies of a string of random numbers. To create these, common CSEGs were aligned with common scale degrees in a major key (and since there are thirteen cardinality-three CSEGs but only twelve scale degrees, including alterations up to enharmonic equivalence, the least-common CSEG was omitted from consideration). Both pitch collections were then aligned to C major and reordered to create the most plausible C major melody possible. Figure 4.12 shows the results from op. 5:

Figure 4.12: Melody generated using the CSEG distribution from Webern, op. 5, no. 1


The results are not terribly jarring or dissonant, but neither are they terribly reminiscent
of C major, especially since the passage contains no Cs. By contrast, the results generated
from the random pitches seem much more aligned with C major:

Figure 4.13: Melody generated using the CSEG distribution of a string of random pitches

The unnevenness of the distribution in Figure 4.12 makes it more likely that an entire
sample could be generated without any instances of the tonic pitch, even though the tonic
was assigned to be the most common pitch. The distribution also leads to the appearance
of less common scale degrees, absent from Figure 4.13. In short, because the first
movement of Webern's op. 5 uses all thirteen possible CSEG classes, from the
perspective of CSEG entropy it is less "diatonic," whereas the string of random numbers
is more "diatonic." This causes the unintuitive results shown in Tables 4.9 and 4.10.
Similarly, when CSEG entropy is used to compare two freely-atonal works, the degree to
which these works repeat pitches will have a much greater effect on the results than is
justified.
Arguably, this disparity speaks to a fundamental difference in the way we as
listeners understand motivic predictability, as opposed to tonal predictability. When
listening for motives, we tend to remember the hits and forget the misses; a great deal of
non-motivic material can pass without the listener deciding the piece has become

unmotivic. (Certainly this is the case for the CSEG entropy algorithm, since it generates a
new CSEG starting on each pitch, in other words generating a lot of "misses" for each
"hit.") By contrast, the number of successive non-diatonic pitches the listener can hear
without concluding that the piece is non-tonal is much more limited. The entropy formula
is better suited to the latter, since the presence of chromatic pitches increases entropy
figures dramatically.
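The window-based generation of CSEGs described in the parenthetical above, together with the entropy formula it feeds, can be sketched in Python. This is an illustrative reconstruction, not the program actually used in this study; the melodic line and the convention that repeated pitches share a contour rank are assumptions:

```python
from collections import Counter
from math import log2

def csegs(pitches, n=3):
    """Sliding-window CSEGs: a new CSEG begins on every pitch, so each
    motivic 'hit' is surrounded by incidental 'misses'."""
    out = []
    for i in range(len(pitches) - n + 1):
        window = pitches[i:i + n]
        ranks = sorted(set(window))              # repeated pitches share a rank
        out.append(tuple(ranks.index(p) for p in window))
    return out

def entropy(symbols):
    """Shannon entropy, in bits per symbol, of the observed distribution."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum(c / total * log2(c / total) for c in counts.values())

line = [60, 64, 62, 65, 64, 67, 60, 64]          # hypothetical line (MIDI numbers)
print(csegs(line)[0])                            # (0, 2, 1)
print(round(entropy(csegs(line)), 3))
```

Because repeated pitches collapse onto one rank, a window such as (60, 60, 64) yields a repetition CSEG like (0, 0, 1), which is how the thirteen cardinality-three CSEG classes discussed above arise.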
Nevertheless, CSEG entropy still produces valuable results when restricted to
serial music, in particular serial music in which repeated pitches are disallowed (which
disqualifies, for example, Babbitt's The Widow's Lament in Springtime, in which the same
pitch is often attacked repeatedly without constituting a departure from the row).
Returning to the analogy of pitch entropy, comparing two serial pieces is akin to
comparing two wholly diatonic (or almost wholly diatonic) pieces; since there are no
chromatic CSEGs, the final results are able to reflect the distributions of the diatonic
CSEGs.
Table 4.12 shows CSEG entropies for opp. 18, 25, and 27. As expected, no
significant differences are found between opp. 25 and 27 for any CSEG cardinality. For
cardinalities three and four, the algorithm is able to distinguish between op. 25 (motivic)
and op. 18 (non-motivic). Given that the motives in op. 25 are all less than five pitches in
length, the clustering seen at higher CSEG cardinalities is to be expected.
This algorithm loses some of the motivic character of op. 27, since it cannot
account for retrogressions, for example in situations like Figure 4.15.


                 op. 18   op. 25   op. 27
3-CSEG entropy   3.26     2.94     3.08
Deviation        .07      .07      .08
4-CSEG entropy   5.32     5.00     5.13
Deviation        .14      .15      .17
5-CSEG entropy   7.33     7.02     7.08
Deviation        .21      .24      .25
6-CSEG entropy   8.80     8.38     8.28
Deviation        .26      .29      .31

Table 4.12: CSEG entropies for serial works

Figure 4.14: CSEG entropies for serial works


Figure 4.15: Op. 27, no. 1, mm. 20-21

Although the CSEG algorithm is able to pick up this pattern each time it occurs in the B
section of the first movement, it cannot recognize the similarity between the two halves
of this motive. An algorithm that computed entropy based on CSEG prime forms instead
of CSEGs would be useless for cardinality-three CSEGs, but could potentially generate
useful results for higher cardinalities.
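A sketch of such a class-based canonicalization, under the assumption that "prime form" here means equivalence under retrograde and inversion (in the spirit of Marvin and Laprade's CSEG classes):

```python
def cseg_class(cseg):
    """Reduce a CSEG to one representative of its class under identity,
    retrograde, inversion, and retrograde inversion."""
    n = max(cseg)
    forms = [
        tuple(cseg),
        tuple(reversed(cseg)),                   # retrograde
        tuple(n - c for c in cseg),              # inversion
        tuple(n - c for c in reversed(cseg)),    # retrograde inversion
    ]
    return min(forms)                            # arbitrary canonical choice

# The two retrograde-related halves of the motive now count as one symbol:
print(cseg_class((0, 2, 1)) == cseg_class((1, 2, 0)))   # True
```

This also illustrates why the reduction gives little purchase at cardinality three: under these equivalences the contours (0,2,1), (1,2,0), (1,0,2), and (2,0,1) all reduce to a single class.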
To address the problem of repetitions, one could also alter the CSEG algorithm by
removing the least common third of the CSEGs present. In most cases this will remove
the repeated-pitch CSEGs, although if the repeated-pitch CSEG is motivic and occurs
frequently (as in a Berio Sequenza, for example), it would be retained. This allows the
algorithm to generate meaningful results for non-serial works.

PC-Set Entropy

Another possible extension of interval-based entropy is entropy based on pitch
class sets. Such a construct would stop short of reflecting intervallic organization fully,
but would better reflect proximity of intervals than simple interval entropy does. Lack of
regard for the ordering of pitches within a pc-set may even be an advantage, allowing the
analyst to communicate (for example) the degree of predictability audible through
tetrachordal combinatoriality in a way that ordered Markov chains cannot. Entropy based
on pc-sets is also a first step towards the computation of entropy based on
transformations.
The simplest possible form of pc-set entropy, if not the most useful, is computed
based on discrete pitch successions of a given cardinality in a single melodic line. These
data are based on brute force segmentations; the first three pitches in a line make up the
first trichord, the next three make up the second trichord, and so forth. Table 4.13 shows
horizontal (discrete) pc-set entropy in the vocal lines of opp. 16 and 25. As expected, pc-set entropy is consistently higher in op. 16 than in op. 25, since the latter is based on a
derived row and uses a few pc-sets predictably.
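The discrete segmentation just described, and the overlapping window variant taken up with Table 4.14, can both be sketched in Python. This is an illustrative reconstruction with an invented line; it counts literal pitch-class sets, whereas a fuller version would first reduce each set to its Tn/TnI set class:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy (bits per symbol) of the observed distribution."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum(c / total * log2(c / total) for c in counts.values())

def discrete_sets(pitches, card=3):
    """Brute-force discrete segmentation: pitches 1-3 form the first
    trichord, pitches 4-6 the second, and so forth."""
    return [frozenset(p % 12 for p in pitches[i:i + card])
            for i in range(0, len(pitches) - card + 1, card)]

def window_sets(pitches, card=3):
    """Overlapping segmentation: a pc-set begins on every pitch."""
    return [frozenset(p % 12 for p in pitches[i:i + card])
            for i in range(len(pitches) - card + 1)]

line = [60, 61, 64, 72, 73, 76, 67, 68, 71]      # invented (014)-saturated line
print(entropy(discrete_sets(line)))              # low: one set reused
print(entropy(window_sets(line)))                # higher: juxtaposition sets
```

The windowed figure exceeds the discrete one because the overlaps manufacture "juxtaposition" sets, the background noise discussed below in connection with Table 4.14.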

                  Trichords  Deviation  Tetrachords  Deviation  Hexachords  Deviation
op. 16            4.33       .05        4.81         .05        5.10        .05
op. 25            2.74       .04        3.75         .05        3.99        .05
Maximal entropy   3.58                  4.87                    5.64

Table 4.13: Pc-set entropies in op. 16 and op. 25 using discrete segmentation algorithm
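The Maximal entropy row can be checked directly: with every available set class equally likely, entropy reaches log2 of the alphabet size, and there are 12 trichord, 29 tetrachord, and 50 hexachord classes under Tn/TnI equivalence:

```python
from math import log2

# Set-class counts by cardinality (Forte's tables)
for name, count in [("trichords", 12), ("tetrachords", 29), ("hexachords", 50)]:
    print(name, round(log2(count), 2))
```

This gives 3.58, 4.86, and 5.64 bits; the trichord and hexachord figures match the table exactly, and the tetrachord figure differs from the table's 4.87 only in the last rounded digit.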


Figure 4.16: Pc-set entropies in op. 16 and op. 25 using discrete segmentation algorithm

Table 4.14 re-examines the same musical material, but using an algorithm
modified to accept overlapping pc-sets. To generate these data, each pitch of these vocal
lines was accepted as the beginning of a pc-set.
As in CSEG entropy calculations, the window pc-set algorithm creates a good
deal of background noise. Even if a composer relies heavily on just two or three pc-sets,
new pc-sets are created by the juxtaposition of these sets: pc-sets that may or may not be
relevant to hearing or analysis of the piece. The algorithm attempts something
comparable to hearing all possible horizontal segmentations (of a given cardinality)
simultaneously, which makes it difficult to hear any particular segmentation clearly. For
this reason, the window-algorithm scores are almost all higher than those generated by
the discrete segmentation algorithm, many of them dramatically higher.

          Trichords  Deviation  Tetrachords  Deviation  Hexachords  Deviation
op. 16    3.87       .08        5.40         .13        7.13        .19
op. 25    3.80       .09        5.24         .14        6.44        .19

Table 4.14: Pc-set entropies for op. 16 and op. 25 using window algorithm

Figure 4.17: Pc-set entropies for op. 16 and op. 25 using window algorithm

Table 4.15 shows vertical pc-set entropy figures for these same works.

                          op. 16   op. 25, no. 1
Vertical pc-set entropy   2.76     3.30
Deviation                 .10      .35

Table 4.15: Vertical pc-set entropy in op. 16 and op. 25, no. 1

Figure 4.18: Vertical pc-set entropy in op. 16 and op. 25, no. 1


Although the (014) trichord saturates both the vocal and piano lines of op. 25 when these
lines are considered independently, no similarly pervasive feature emerges from the
verticalities created between the piano and the voice, even though verticalities presented
in the piano alone frequently feature an (014) subset. By contrast, the verticalities
presented between instruments in op. 16 are much more uniform.
Certainly none of these approaches seems completely appropriate on its own.
Opus 25, in particular, seems to necessitate an approach that deals with both the
horizontal and the vertical, given the rhythmic independence of the vocal and piano lines;
even if purely vertical data were of unquestionable validity, one would have difficulty
finding sufficient rhythmic alignments to support such an analysis. However, refining an
algorithm that would be able to decide the most appropriate segmentations of musical
material presents sufficient complication to place it beyond the scope of this project.
Computing entropy based on analyst-specified segmentations presents hurdles
avoided by computer-based segmentations. Entropy results for such segmentations
become questionable if any pitches are not included in a segmentation, since the
algorithm cannot account for singleton pitches. In dealing with pc-sets of non-uniform
sizes, subsets and supersets become an issue, although one could imagine a fuzzy
algorithm that counts pc-sets based on the percentage of pitches held in common between
an original pc-set and its subset. Elisions could be handled in the same manner.
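Such a fuzzy count might look like the following sketch, where both the pc-sets and the 75% threshold are invented for illustration:

```python
def overlap(a, b):
    """Fraction of the smaller set's pitch classes found in the larger set."""
    small, large = sorted((set(a), set(b)), key=len)
    return len(small & large) / len(small)

def fuzzy_count(target, segments, threshold=0.75):
    """Count segments matching `target` at least `threshold`-much, so
    near-matches such as subsets, supersets, and elisions still count."""
    return sum(1 for seg in segments if overlap(target, seg) >= threshold)

segments = [{0, 1, 4}, {0, 1, 4, 8}, {1, 4}, {2, 3, 7}]
print(fuzzy_count({0, 1, 4}, segments))   # 3: the superset and subset both count
```

Sizing the overlap by the smaller set is one design choice among several; scaling by the union instead would penalize supersets more heavily.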
Although these steps would refine the results given above, it seems unlikely that
they would change the results from Tables 4.14 and 4.15 in any meaningful way.
Computer-generated segmentations that consider horizontal and vertical intervals
separately give us an impoverished reflection of the work's organization, but they still
reflect that organization to some extent and (at least in this case) still provide reasonable results.

CHAPTER V
INFORMATION AND TIME

Many theorists have drawn connections between the information content of a


work often discussed in terms of predictability and the listener's perception of the
passage of time within that work. Barbara Barry writes
[W]ithin the speed at which a piece is played, the greater the density of
information as deviance from the norms of the system, the longer the time
experiences. In other words, within the tempo of the work, the larger the
amount of density (in number of notes, harmonic complexity, extended
range or whatever dimension) and the further it is from the norms, the
more effort (processing activity) is needed to draw it to them and shape
it coherently... [Greater density] requires more processing effort because
it has a greater uncertainty.45
In other words, an unpredictable work will be perceived as longer than a predictable one.
Predictable works can be fit into mental schemata and can be easily chunked, and
therefore require less processing time, whereas works that disrupt these schemata require
more effort on the listener's part. What constitutes a "norm" for Barry is flexible. She
uses this framework to argue that the beginning of the Eroica has a shorter perceived
length (relative to its actual length) than the beginning of Brahms's Symphony No. 3,
which she attributes to the latter's more variable rhythmic patterns, greater chromaticism,
and wider range. She then uses the same framework to argue for a similar contrast
between Webern's op. 22 Quartet and op. 21 Symphony.
Of course, these comparisons depend heavily upon the perceptions of each
individual listener. A listener who is familiar with the beginning of the Brahms but not
with the Beethoven may find the Brahms to have a shorter perceived length, since
familiarity has removed a great deal of the piece's uncertainty. Alternately, such a listener
may be processing a different kind of information in each piece, if increased knowledge
45. Barbara Barry, Musical Time: The Sense of Order (Stuyvesant, NY: Pendragon Press, 1990), 167.


of the piece allows for more in-depth diachronic analysis. Perhaps the listener is
comparing the current performance of the work to previous performances, deriving new
information and new sources of unpredictability from this comparison.
It also seems likely that musical works that deviate from the norm of a listener's
experience would bear a temporal learning curve. Kramer, in his anecdote regarding
Satie's Pages mystiques, reports feeling bored (and therefore, one imagines, feeling that
time was passing slowly) until he had adjusted to the work's depiction of time; one
imagines a similar adjustment period would be experienced in a work much denser than
the listener's experienced norm. Since Kramer is comparing two works that both fall
outside common practice norms (and Barry is comparing two works that both fall pretty
squarely within them, at least by comparison with Satie), the complication this learning
curve introduces seems less problematic, but a comparison between Brahms and Babbitt
would almost certainly require a more in-depth exploration of the works' learning curves.
The context of the passage in question can also change the listener's perception of
time. That is, if we were to compare the beginnings of these symphonies' third
movements, instead of their first movements, our perception of time would almost
certainly be altered (at least at first) by differences in the endings of the second
movements. Barry is careful to avoid this by concentrating on first movements, but
perhaps the listener could not find parking and ran into the concert hall at the last minute,
and therefore finds even a dense, unpredictable piece to be slow-paced.
Despite these complicating factors, the generalization that correlates a longer
perceived duration with unpredictability and greater density seems helpful as a
generalization, to be used when comparing two pieces in which all other variables
genuinely are equal, even if a listener's experience is almost certainly more complicated.
Stockhausen concurs with Barry's larger conclusions:
[T]he greater the temporal density of unexpected alterations (the
information content), the more time we need to grasp events, and the

more time we have for reflection, the quicker time passes; the lower the
effective density of alteration (not reduced by recollection or the fact that
the alterations coincide with our expectations), the less time the senses
need to react, so that greater intervals of experiential time lie between
the processes, and the slower time passes.46
This formulation resonates with Kramer's anecdotes describing the passage of time in a
highly repetitive work as shortened and the passage of time in an unpredictable work as
elongated.47 Kramer argues that the more easily chunked a melody is, the easier its
memorization will be, and the shorter its subjective duration. He cites as support an
experiment by psychologist Robert Ornstein, in which listening subjects identified a tape
in which sounds were played in a predictable order (each sound playing ten times before
the next sound was heard) as, on average, 75% the length of a tape in which the same
sounds were played in random order.48
Kramer also links information content and the listener's perception of musical
time with the presence and pervasiveness of musical discontinuity.
Discontinuity is a profound musical experience. The unexpected is more
striking, more meaningful, than the expected because it contains more
information. The power of discontinuity is most potent in tonal music,
which is the music par excellence of motion and continuity...Harmonically
defined goals and linear priorities for voice-leading provide norms of
continuity against which discontinuities gain their power.49
Kramer does not directly define discontinuity, but gives the example of a tape splice: the
listener is instantaneously transported from one sound world to another.50 Kramer's
46. Karlheinz Stockhausen, "Structure and Experiential Time," Die Reihe 2, trans. Eric Smith (Bryn Mawr, Pennsylvania: Presser, 1959), 64.
47. Kramer, The Time of Music, 379-380.
48. Ibid., 337-338.

49. Jonathan Kramer, "Moment Form in Twentieth-Century Music," The Musical Quarterly 64, no. 2 (April 1978): 177.
50. Jonathan Kramer, "New Temporalities in Music," Critical Inquiry 7, no. 3 (Spring 1981): 544.


discontinuities take many forms, but consistently involve the violation of the listener's
expectations, and, as a result, a high information content.
Kramer cites these discontinuities as giving rise to new experiences of time, often
phrasing these experiences in terms of the predictability of the music's progress towards
large-scale goals, events comparable to structural cadences. This progress forms the basis
of the temporal categories he sets forth in The Time of Music, differentiating between
music that makes predictable progress towards a goal ("directed linear time"), music that
makes progress towards several different goals simultaneously or makes this progress in a
disordered or unpredictable fashion ("multiply-directed linear time"), and music in which
goals are unpredictable or nonexistent ("nondirected linear time").51 When discontinuities
are so pervasive as to destroy any perception of linear connections between events,
"moment time" results; when there are neither discontinuities nor progress but simply
stasis, "vertical time" results.52 Generally, the distortion of perceived clock time is less
important to Kramer than the creation of subjective piece-time, but both Kramer and
Barry connect these perceptions of time with information and predictability.
Given these connections, the use of information theory to describe the passage of
time seems reasonable. After all, information theory entropy measures predictability, the
characteristic at the heart of Barry and Kramer's conceptions of temporality. Certainly we
cannot equate temporality with predictability, and Kramer points out that even though
discontinuity often facilitates non-directed perceptions of time, the former need not
always imply the latter, and vice versa. Nevertheless, examination of a piece's
predictability may illuminate that piece's depiction of temporality, just as examination of
a piece's discontinuities does.
Although there are aspects of musical predictability that entropy cannot describe
51. Kramer, Time of Music, 452ff.
52. Kramer, "New Temporalities," 549.


well (for example, scale degree 7 'predicting' scale degree 1, or a flat predicting a
descent), predictability in contextual music is often created in the way entropy measures
best: repetition. Entropy-based analyses make the implicit claim that the more a pitch is
repeated throughout the piece, the more referential power it bears, and while this is often
true in tonal music, one can easily imagine melodies that imply a certain tonic without
ever stating it (for example, consistent use of a dominant seventh built on D to imply G
major). Frequent repetition of a few pitches is often symptomatic of tonality, but the
correlation between repetition and referentiality is not exact. This reliance on repetition
(as opposed to the relationships between scale degrees or harmonic context) seems less
problematic for contextual music, though. Kramer describes goals in Webern's music as
created "through frequent emphasis, through reiteration and perseverance."53
Boykan discusses serial and canonic structure in the Concerto for Nine Instruments, op.
24, but attributes its aural intelligibility to rhythmic patterns and repeated row
segments.54 (Of course, Schenker values the goal-defining capabilities of repetition in
tonal music, as well: "Only by repetition can a series of tones be characterized as
something definite. Only repetition can demarcate a series of tones and its purpose.
Repetition thus is the basis of music as an art."55) A strong relationship between
repetition and predictability or referentiality implies that information theory entropy can
be an appropriate analytical tool for this repertoire.
Kramer also notes that the relationship between continuity and linear temporality
is stronger in post-tonal music. In tonal music, he argues, directedness and implication
can occur in the absence of continuity, since an event may be implied by events that far

53. Kramer, Time of Music, 38.
54. Martin Boykan, Silence and Slow Time (Lanham, MD: Scarecrow Press, 2004), 144.
55. Heinrich Schenker, Harmony, ed. Oswald Jonas, trans. Elisabeth Mann Borgese (Chicago: University of Chicago Press, 1954), 5.


precede it.56 However, he claims that post-tonal music can only be linearly directed if
clear continuity exists, something he attributes to voice leading or to directed processes in
other musical domains (for example, the gradual crescendo and range limitations that
create directedness in Ligeti's Atmosphères).57 Happily, of all the potential sources of
linearity in music, continuity is the easiest for information theory to detect. Parsimonious
voice-leading can be understood as more frequent use of a few small melodic intervals
and although entropy cannot tell us much about interval size, it is very good at describing
interval frequency. One imagines entropy calculations on something like dynamic CSEGs
(in which ppp pp p is represented as (012), for example) could describe the
directedness of Atmospheres, as well.
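A dynamic CSEG of this kind is easy to sketch; the mapping of marks to ordinal levels below is a hypothetical choice made for illustration, not a standard:

```python
# Hypothetical ordinal scale for dynamic markings
LEVELS = {"ppp": 0, "pp": 1, "p": 2, "mp": 3, "mf": 4, "f": 5, "ff": 6}

def dynamic_cseg(marks):
    """Contour of a run of dynamics, e.g. ppp, pp, p -> (0, 1, 2)."""
    ranks = sorted(set(LEVELS[m] for m in marks))
    return tuple(ranks.index(LEVELS[m]) for m in marks)

print(dynamic_cseg(["ppp", "pp", "p"]))   # (0, 1, 2)
print(dynamic_cseg(["f", "p", "mf"]))     # (2, 0, 1)
```

Once dynamics are encoded this way, the same entropy machinery used for pitch contours applies unchanged.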
Entropy can also be used to detect discontinuities. Doubtless a musical
discontinuity involves change, whether this change arises from an unexpected shift to a
new pitch collection, the establishment of a new meter, the dissolution of an established
motive, or any of a variety of possible changes. In each case, a higher entropy score
arises when the passage is examined as a whole. While these changes may not be evident
in pitch-based entropy, they may be evident in an alternative alphabet; for example, pitch-based
entropy cannot detect discontinuities that arise from a motive being dissolved, but
CSEG-based entropy can, as can entropy based on small-cardinality non-overlapping
pc-sets (if, for example, the string 123456 123456 124536 124536 is analyzed in terms of
non-overlapping trichords).
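The parenthetical example can be verified directly with the entropy formula used throughout (a sketch; the integers stand in for any six distinct pitch classes):

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy (bits per symbol) of the observed distribution."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum(c / total * log2(c / total) for c in counts.values())

def trichords(seq):
    """Non-overlapping segmentation into ordered trichords."""
    return [tuple(seq[i:i + 3]) for i in range(0, len(seq) - 2, 3)]

motive    = [1, 2, 3, 4, 5, 6] * 2                # 123456 123456
dissolved = motive + [1, 2, 4, 5, 3, 6] * 2       # ... 124536 124536
print(entropy(trichords(motive)))                 # 1.0 bit: two trichords
print(entropy(trichords(dissolved)))              # 2.0 bits: alphabet doubled
```

Dissolving the motive doubles the trichord alphabet, and the entropy of the passage taken as a whole rises from 1.0 to 2.0 bits.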
Of course, what constitutes continuity or discontinuity is itself disputable. Hasty
finds continuity in Stravinsky's Symphonies of Wind Instruments, a work that Cone
characterizes as 'stratified' (possessing "the separation in musical space of ideas...

56. Kramer, Time of Music, 21.
57. Ibid., 39.


juxtaposed in time") and that Kramer claims as an early example of moment time.58 For
Hasty, continuity and connection can always exist between events, insofar as the listener
is able to compare and relate them. Similarly, Hasty argues, the listener is able to place
events in time based on when they occur in the piece, which creates directionality.59
Bearing this interpretation in mind, the fairest label for what entropy detects seems to be
the "potential for discontinuous hearings." A listener who is predisposed to hear two
musical moments as related can probably find reason to do so, but a higher entropy score
implies that these comparisons will be made based more on differences and violated
expectations than on similarity and predictability.
Overall, at least for this repertoire, we see a lower entropy score correlated with
the factors that lead to Kramer's linearly directed musical time (continuity; repetition of a
few elements) and a higher score correlated with factors that lead to multiply-directed or
non-directed time (many goals repeated with more or less equal frequencies;
discontinuities).60 Of course, there are other factors involved in the perception of musical
goals, things entropy cannot detect easily: symmetrical structures, presenting a referential
pc-set at the very beginning of a work, and dynamics, among others. Because of these
factors, an analyst completing an entropy-based analysis faces the same obstacles s/he
would using any other analytical device: the results must be used in the context of their
musical ecosystem and should either resonate with musical experience or conflict with it
in an interesting and meaningful way.

58. Christopher Hasty, "On the Problem of Succession and Continuity in Twentieth-Century Music," Music Theory Spectrum 8 (Spring 1986): 62ff; Edward Cone, "Stravinsky: The Progress of a Method," Perspectives of New Music 1, no. 1 (Autumn 1962): 18; Kramer, "Moment Form," 178ff.
59. Hasty, "Succession and Continuity," 62.
60. This correlation seems slightly more problematic when comparing works by composers from different periods: claiming, for example, that a more chromatic tonal work is necessarily less directed than a more diatonic tonal work.


Let me be clear about what my aim in presenting these correlations is not:


Hegel argues that science cannot cope with the unrest of temporality, and
is therefore forced to use 'a paralyzed form, viz., as the numerical unit,...
which... reduces what is self-moving to mere material, so as to possess in
it an indifferent, external, lifeless content'... Temporality is drastically
constrained in this quantitative rendering.61
My discussion of musical time depends upon quantifiable results, but my goal is not to
present a quantification of musical time against Hegel's cautions, or to equate subjective
musical time with Adorno's rationalized, clock-bound time through such quantification.
Although entropy results are numerical and objective, they are in some sense metaphors
for a musical experience; their goal is not to reduce piece-time but rather to describe it.
These results are not conclusions in and of themselves, but are instead evidence for a
particular hearing only useful when contextualized.
It is also worth remembering that entropy describes the language in which a
message is written, not the message itself.62 We can think of entropy as measuring the
degree to which a language facilitates goal-directedness, independent of whether or not
the composer establishes a goal in a particular composition. Low entropy is indicative of
a large amount of repetition, which often imbues that language with goal-oriented
tendencies; the repeated event will probably be heard as a goal, and a composer would
have to work actively to prevent that hearing. Extremely low entropy implies something
like Kramer's vertical time; directedness is unlikely because the music never leaves its
referential sonority (or motive, ostinato, or rhythmic pattern), but is still possible, through
dynamics, timbre, tempo, or other devices. High entropy indicates very little repetition,
which implies a fair amount of discontinuity, and while a composer working in such a

61. Robert Adlington, "Musical Temporality: Perspectives from Adorno and de Man," Repercussions 6, no. 1 (Spring 1997): 11.
62. "Entropy relates not so much to what you do say, as to what you could say." Shannon and Weaver, The Mathematical Theory of Communication, 5.


language can still establish and communicate musical goals, creating these goals becomes
much more difficult. It is likely that these goals would be more difficult for the listener to
hear, as well, while it is entirely possible for the listener to predict goals the composer
never intended in a work with lower entropy. In short, the fairest interpretation of a low
entropy score is, This passage is written in a language that makes goal-oriented hearings
easier for the listener and the composer and which is more likely to facilitate perception
of time as faster. These claims can be illuminative when presented in an analytical
context, but they are not necessarily conclusions in and of themselves.
Much existing work on perception of temporality in Webern can be understood in
terms of information content and density. Johnson does not adopt temporality as his
focus, but his description of time in Abendland III is noteworthy. He describes the song
as one in which "the suggestion of linear movement is largely undermined by the
tendency of each line to contract into a series of ostinati, as if acknowledging thereby that
the 'wanderings' of each line are essentially without direction."63 Underwood draws a
similar association between ostinati and stasis in op. 10.64 In each case, a period of stasis
is created through a reduction in the amount of information being received, while the
more information-rich surrounding material is associated with a greater sense of
propulsion.
Of course, this statement cannot be made for all ostinati; the cello line in op. 5,
no. 3, is a clear counterexample, in which the presence of an ostinato increases the
music's sense of propulsion, rather than decreasing it. Perhaps this is because the ostinato
in op. 5, no. 3 does not constitute a reduction in information content. Since it begins the
movement, we do not have a more information-rich source of comparison; if no. 3 began
63. Julian Johnson, Webern and the Transformation of Nature (Cambridge: Cambridge University Press, 1999), 139.
64. James Underwood, "Time and Activity in Webern's Opus 10," Indiana Theory Review 3, no. 2 (Winter 1980): 34-35.


with a more complex cello line and then shifted to the ostinato after a few measures, the
listener would be more likely to hear it as stalling, comparable to Johnson's description of
Abendland III. Alternately, if we compare the beginning of no. 3 with the ending of no.
2, we find a dramatic acceleration. Even though the first six measures of the cello line of
no. 3 have almost the same information content as the last four measures of no. 2, no. 3
presents this information at a much faster rate.
Underwood notes that in Webern's op. 10, thinner, pointillistic textures unfold
over a shorter period of time than denser, lyrical or polyphonic passages.65 This matches
Barry's claims that, ceteris paribus, a dense passage requires more time to process than a
sparse one, as a result of its greater information content.
Barry also compares perception of time in the op. 22 Quartet and the op. 21
Symphony, with particular interest in the schemata through which the listener processes
the information s/he receives musically. Barry notes that both serial works necessitate
more difficult learning processes, because they contain little or no redundancy, and "it is
redundancy, as repeated ideas, sections and closing figures which enables difficult
information to be grasped and shaped."66 Nevertheless, in the Quartet Barry finds
compensating factors of "audible intelligibility of unity and overall design," as well as
continuity presented through counterpoint, especially in the first movement: qualities
she finds missing in the Symphony, whose greater pointillism resists perceptions of
continuity and audible structure.67 Barry cites continuity and the listener's ability to
perceive form as contributors to the listener's perception of time as directed in tonal
music; she finds their absence in op. 21 to be a cause of perceived spatial time. Lippman
concurs: "[E]very feature of music that makes for continuity and propulsiveness and logic
65. Ibid., 32-34.
66. Barry, Musical Time, 225.
67. Ibid., 227-228; 218.


here is systematically excluded, from sheer tonal persistence and scalar progression, to
forces of resolution and almost all perceptible varieties of repetition."68 In short, Barry
and Lippman find spatial time to be created by an influx of information that the listener is
not able to chunk into large units or process quickly enough as single pitches.
Rochberg describes much of Webern's music, in particular the pointillistic works,
as independent of rhythmic periodicity and any sense of direction fostered by that
periodicity. Of the beginning of the Variations for Piano, op. 27, Rochberg writes: "The
beat and meter is now a frame, not a process: a frame on which to construct symmetries
of pitch and rhythm."69 In the Concerto for Nine Instruments, op. 24, by contrast,
Rochberg finds "an intensity of rhythmic drive," and therefore direction.70 The
distinction is one of predictability brought about by periodicity. Rochberg argues that in
the first movement of op. 27, "the beats... have no more relation to each other than the
seconds which a clock ticks off" and that "[u]nless the initial pulse of the meter is activated,
propelling itself upon the next pulse and the next, the beat and the meter become static
entities, succeeding each other but not progressing to each other."71
Franchisena speaks of something similar:
The strong beats of a measure acted as the guiding idea that ruled the order
of consonant and dissonant intervals. In the twentieth century... the arc of
tension and release remains in effect, but now is freed from its positioning
by weak and strong beats.72
68. Edward Lippman, "Progressive Temporality in Music," The Journal of Musicology 3, no. 2 (Spring 1984): 134.
69. George Rochberg, The Aesthetics of Survival: A Composer's View of Twentieth-Century Music (Ann Arbor: University of Michigan Press, 2004), 101.
70. Ibid., 103.
71. Ibid., 102.
72. César Franchisena, "El tiempo en la composición actual," Revista del Instituto Superior de Música 3 (Nov. 1993): 118. ("Es así que el tiempo fuerte de compás se comportaba como la idea directriz que regía el orden de los intervalos consonantes y disonantes, etc. entrado el s. XX... [e]l arco tensión-distensión permanece vigente, pero ahora liberado de su ubicación entre los tiempos débiles-fuertes.")

78

Franchisena refers to this as "vital time": something determined by qualities intrinsic to
a pitch considered alone (height, dynamics, duration, and timbre). As an example, he
examines a single pitch from the fifth measure of op. 9, no. 5, only mentioning the rest of
the piece to contextualize the pitch's duration (expressing it as a fraction of the work's
total length, rather than as an eighth note). This approach seems justified by the pitch's
freedom from metrical accent, as in Rochberg's description of the first movement of op.
27; because the surrounding material does not impose any metrical obligations (i.e.,
metrical predictability) upon this particular pitch, Franchisena is free to consider the
temporal weight of this single pitch. Türcke takes a similar but less extreme view,
depicting Webern's music in terms of a single unchanging moment presented repeatedly
in different forms.73
In some sense, this is the spirit in which Forte and Escot's proportion-based
analyses are undertaken. Both demonstrate similarities that cross the borders between
discontinuous sections, Forte hearing these passages in terms of duration, and Escot in
terms of duration, tempo, pitch range, and other factors. Both Forte and Escot aim to
demonstrate large-scale structures through these proportions.74 One imagines that if these
structures were audible, they would not help the listener predict upcoming events, but
could help the listener identify the placement of section endings and beginnings as
reasonable in retrospect, thereby providing a framework for old and new musical
information.
Considered together, these pointillism-based perspectives on temporality call to
73 Berthold Türcke, "Ein Stehenbleiben, das in die Weite geht: Der Gestus der Zeit in Weberns Spätwerk," Musik-Konzepte (November 1984): 12-13, quoted in Brunhilde Sonntag, "Adornos Webern-Kritik," Musik im Diskurs Band 2: Adorno in seinen musikalischen Schriften (Regensburg: Gustav Bosse Verlag, 1987), 149-150.
74 Allen Forte, "Aspects of Rhythm in Webern's Atonal Music," Music Theory Spectrum 2 (1980): 90-109; Pozzi Escot, "Toward a Theoretical Concept: Non-linearity in Webern's op. 11/1," Sonus 3, no. 1 (Fall 1982): 18-29.
mind Hasty's and Kramer's hearings of the Stravinsky Symphonies of Wind Instruments:
the latter examining discontinuities and accepting the work's musical moments as discrete
units, and the former identifying similarities and interactions among these moments. This
distinction demonstrates a crucial point: although the rate at which the listener receives
information shapes his or her perception of time, what the listener chooses (or is
predisposed) to do with that information may have an even stronger effect. Within the
radio transmission metaphor, this situation is comparable to how the receiver chooses to
decode and interpret the original message. Any information-theoretic analysis that
extends beyond the presentation of statistics and into the interpretation of statistics
represents such a choice.
A piece's depiction of temporality and that piece's communication of musical
information are by no means coterminous, but the latter almost certainly influences the
former. By examining the nature of information in a given musical language, we gain
insight into the role of musical expectation and prediction in works composed in that
language, as well as a quantifiable metaphor for subjective musical time.

CHAPTER VI
ANALYSES

The analyses of this chapter, which discuss the first of the op. 16 Fünf Canons and
the fourth of the op. 5 Fünf Sätze, deal with contrasting temporalities as determinants of
form. My hope is to demonstrate, through these analyses, the flexibility and utility of
information theory entropy in analytical contexts, as well as the connections between
information theory entropy and perceived time.

Op. 16, no. 1: "Christus factus est"

Opus 16 (Fünf Canons) is the last of Webern's fully nonserial works, composed
from 1923-1924. Shreffler hypothesizes that Webern used these canons and the strict
pitch ordering they necessitate as a way of coming to terms with serial technique; she
notes that serial operations and invariance are in play throughout op. 16.75 Fittingly, the
first of these canons (the last one composed) articulates movement from one type of
complexity and one type of temporality into another.
A listener who is attentive enough to hear "Christus factus est" as a canon will
hear a clear rupture between the first and second phrases, when the canon breaks in m. 8.
The break is not subtle: the soprano enters forte half a beat earlier than expected,
initiating a string of accented eighth notes. Both the clarinet and bass clarinet are silent on
the first beat of this measure, giving the soprano her first unaccompanied moment in the
piece, and drawing attention to the canon break.
From a structural perspective, this canon break establishes the beginning of a new
section; the section is just as clearly ended when the canon breaks again in m. 11.
75 Shreffler, "Mein Weg...," 302.

However, these canon breaks also facilitate the creation of many of the musical
characteristics that distinguish the second phrase from the phrases that surround it.
Perhaps most noticeably, the break in m. 8 initiates a period of metric confusion that lasts
until the canon breaks again in m. 11; when the soprano line enters a quarter note early in
m. 8, it disrupts the half-note tactus that had been established unmistakably by the
straight half notes that began each melodic line (and evidenced by seven measures of cut
time). To accommodate this metric disturbance, m. 8 shifts (jarringly) into 5/4. Measure 9
attempts to resolve this disturbance by returning to cut time, a hearing reinforced by the
soprano's second iteration of this section's eighth-note motive, which creates a
perceptible downbeat on the word "Deus," but the metrically-disturbed melodic line
pushes the listener back into 5/4 for measure 10. Canon figures in mm. 8-10 occur closer
together, either as a result or as a cause of this displacement. The soprano and clarinet,
originally separated by two half notes, are now separated by one half note; the soprano
and bass clarinet, originally separated by one half note, are now separated by a quarter
note, a jolting change from the first phrase, but one that allows a return to (something
like) a half-note tactus.
As one might expect, the metric ambiguity of mm. 8-11 introduces greater
rhythmic complexity. Until m. 8, the listener experiences no rhythm more complicated
than a dotted-quarter/eighth pattern, whereas the second phrase is characterized by a
prominent stream of eighth notes that become syncopated in m. 10. In m. 10,
maintenance of the canon combined with irregularly-placed downbeats (to accommodate
stresses in the text) creates a tie over a barline in the bass clarinet, the first time this has
been heard in the piece thus far. By contrast, in m. 11 each part returns to its original clear
half-note rhythm, although a metrical displacement dissonance is evident among the
parts. This more tranquil rhythmic character is made possible by another break in the
canon, occurring in the second half of m. 11, at the beginning of A'.
This last canon break creates a sort of melodic variation. Although mm. 11-13 are


an audible return to the opening material, the canon has not "fixed" itself; what began as
a one-beat canon interval in mm. 1-7 becomes a half-beat interval in mm. 11-13. The
result is a stretto of the opening material, and a synthesis of the A section's melodic
contour with the metric disturbance of the B section.
Vertical minor seconds make this change even more audible, as shown in Figure
6.1. The A section features many prominent minor seconds attacked on beats. In the B
section, minor seconds contribute to a larger impression of metrical instability, sounding
on every eighth-note of m. 8 (once the B section has begun), on every beat of m. 9, and on
every off-beat of m. 10. When the melodic character of the A section returns in A', the
minor seconds have become metrically displaced between voices, beginning in one voice
half a beat earlier than in the other, as though passage through the less-stable B section
has made the A section's minor seconds less stable or less predictable. In all sections,
these vertical minor seconds usually occur with written accents or subito dynamic
changes, adding to their aural salience.
The ternary form established by these events is also supported by pitch
content. The soprano line of the first phrase contains only eight pitch classes: A, Bb, B,
C, C#, D, D#, and F#. Of these, D, A, C#, and Bb are by far the most prevalent; D, C#,
and Bb occur four times each, and A occurs three times. By contrast, the other pitch
classes occur at most twice; this is especially noteworthy given that the first phrase
encompasses less than eight measures, with an average of fewer than three notes per
measure. These four most prominent pitches form inversionally-related pairs at I11
(connecting A with D, and Bb with C#), which allows these four pitches to be similarly
prevalent in the bass clarinet line, which is related to the soprano line by I11. 76
76 Perhaps it is noteworthy that, much as Ligeti claims that Webern's music groups around a central axis temporally, this inversion causes it to group around a central axis in pitch-space: its axis of inversion. It is this axis that allows for the tonal qualities of the first phrase.

Figure 6.1: Vertical ic1s in op. 16, no. 1

Figure 6.1, cont'd.

In fact, the prevalence of these pitches is so thorough in the bass clarinet and
soprano that it requires very little analytical perversity to interpret the first phrase of these
lines in D minor. The most common pitch classes are scale degrees 1, 5, 7, and b6. The
bass clarinet line begins with clear 5-7-1 motion, and the soprano with a slightly weaker
1-b6-5; the first phrase ends with 5-3-7-1 in the bass clarinet, and 1-#3-6-5 in the soprano
(implying a half-cadence at the end of the first phrase, mm. 7-8). In these lines, melodic
cells in the first phrase almost always end on D or A; the only notable exception is m. 4 in
the bass clarinet.
Of the less frequent pitches, D# only appears in the soprano as an enharmonic b2
on its way to 1 (consider the octave-displaced neighbor figure D#-C#-D in m. 7, or D#-F#-D in mm. 3-4), and its counterpart Ab in the bass clarinet only appears as #4 on its
way to 5 (as in the corresponding Ab-Bb-A neighbor figure in mm. 6-7, or Ab-F-A in
mm. 3-4). The remaining pitches generally serve as ornamentation of the four structural
pitches, as in the (octave-displaced) descending line D-C#-C-B-Bb-A heard in mm. 5-6 of
the soprano. Scale degree 3 is slightly more problematic, appearing as F in the bass
clarinet and F# in the soprano, but it is always in close proximity to scale degree 1, and
can usually be interpreted as filling in the third-gap between scale degrees 1 and 5
(consider m. 7 of the bass clarinet).
Of course, this interpretation requires an analyst who is willing to ignore the
clarinet line completely (since it can be read in C minor, a major second below the
soprano line). It also ignores octave displacement, which is jolting enough in places to
make any tonal interpretation difficult. Finally, although both the soprano and bass
clarinet lines can be read in D minor, the harmonic implications of their respective scale
degrees rarely line up; it is easy to hear the bass clarinet in m. 7 as 5-3-7-1 when the line
is played in isolation, but slightly more difficult when this line is overlaid with b2-7-1-#3
in the soprano.
Nevertheless, one can argue that the first phrase is markedly more tonal than the


second phrase. Leaps of intervals 11 and 13 are built into an eighth-note motivic cell that
characterizes this phrase; in these measures, the soprano has three leaps of interval 11 and
four of interval 13, and these intervals are of course preserved in the clarinet and bass
clarinet lines. This allows Webern to exhaust the aggregate by the last beat of m. 8, with
the exception of B, which never appears in this phrase.77
completion per se is an audible feature of this phrase is debatable, but the speed with
which the aggregate is completed certainly speaks to the chromatic nature of this phrase.
Almost paradoxically, the leaps of intervals 11 and 13 that characterize this phrase
allow the canon lines to have more pitch classes in common than were found in the first
phrase. The first four eighth notes in the clarinet and bass clarinet have identical pitch
class contents but in retrograde (possible since the pitch class set {4, 3, 6, 5} is invariant
under I9), a tradeoff made more noticeable by the closer timing of canon entries. The
eighth note pattern E-D#-D#-D in the soprano in m. 9 can be heard as a fuzzier echo of
this same pitch class set (bearing in mind that this set is immediately followed by a D in
its first and more prominent entrance). These similarities in pitch content break down
fairly quickly, making it difficult to argue them as a unifying feature of the second phrase,
but it is no stretch to consider them a device that emphasizes the break between the first
and second phrases.
When the opening melodic material returns in m. 11, it returns in a highly
chromatic fashion (relative to the postulated D minor of the A section). The first two
notes are preserved both in rhythm and pitch closely enough that the return of the
opening melodic material is easily audible, and the overall contour of the melodic lines
in mm. 11-13 is strikingly similar to the contour of mm. 1-4. The melodic line repeats no
pitches and suggests no tonal interpretations. One can argue that this progression
77 However, this interpretation requires that the Bb and A on beat one of m. 8 in the soprano be counted as part of this aggregate. Although they are part of the first phrase, they overlap with the beginning of the second phrase in the clarinet.

represents a development of the melodic material of the A section; having passed through
the highly chromatic B section, the melodic material becomes chromatic itself. It is
certainly noteworthy that the most tonal section of "Christus factus est" is also the most
rhythmically straightforward; in fact, we can argue that the rhythmic ambiguity of the
second phrase imbues the return of the first phrase with rhythmic complexity, much in the
same way that the chromatic content of mm. 8-11 imbues mm. 11-13 with chromaticism.
Despite these sharp distinctions, at first glance pitch class entropy, shown in
Table 6.1 and Figure 6.2, tells us very little about "Christus factus est." These numbers
mask the distinctions drawn above. Strong tonal implications are present in each line of
mm. 1-8, but the three lines implying different keys cancel each other out as far as
entropy is concerned, leading to higher-than-expected entropy results. Although mm.
8-11 do not present tonal implications, these measures still contain repeated pitches and,
as mentioned, quite a few pitch repetitions between instruments. Measures 11-12 present
no repeated pitches (within any given part), but these measures also contain only
twenty-five pitches total, creating an extraordinarily large confidence interval.

            Pitch class entropy    Deviation
mm. 1-8     3.31                   .23
mm. 8-11    3.31                   .25
mm. 11-13   3.43                   .38

Table 6.1: Pitch class entropy in op. 16, no. 1
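The entropies tabulated above follow Shannon's formula, H = -Σ p·log₂(p), applied to the distribution of pitch classes. A minimal sketch of the computation (the pitch-class sequences below are invented illustrations, not the actual content of op. 16):

```python
from collections import Counter
from math import log2

def pitch_class_entropy(pitch_classes):
    """Shannon entropy (in bits) of a sequence of pitch classes 0-11."""
    counts = Counter(pitch_classes)
    total = len(pitch_classes)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical lines: one dominated by four pitch classes (as the
# quasi-tonal A section is dominated by D, A, C#, and Bb), and one
# fully chromatic line in which each pitch class appears once.
centered_line = [2, 9, 1, 10, 2, 9, 1, 10, 2, 1, 10, 3, 6, 2]
chromatic_line = list(range(12))

print(round(pitch_class_entropy(centered_line), 2))   # well below the maximum
print(round(pitch_class_entropy(chromatic_line), 2))  # the maximum, log2(12)
```

A flat distribution over all twelve pitch classes yields the maximum of log₂ 12 ≈ 3.58 bits, which is why the values in Table 6.1 sit close to, but below, that ceiling.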


Figure 6.2: Pitch class entropy in op. 16, no. 1

Isolating the vocal line yields results closer to our intuitions for this piece. While
the results in Table 6.2 and Figure 6.3 show exactly what one might expect, lower
entropy in the vaguely-D-minor A section and higher entropies in later sections, none of
these results is statistically significant. As mentioned, the A' section contains only eight
pitches; the A and B sections contain a slightly more respectable twenty-three and
nineteen, respectively, but neither sample is large enough to generate significant results.78

                        Pitch class entropy    Deviation
mm. 1-8, voice only     2.97                   .32
mm. 8-11, voice only    3.20                   .42
mm. 11-13, voice only   3.28                   .54

Table 6.2: Pitch class entropy in the vocal line of op. 16, no. 1
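Whatever the exact deviation measure behind these tables, the sample-size effect it reflects can be illustrated with a bootstrap estimate of the spread of the entropy statistic: resampling a short line produces a far more variable estimate than resampling a longer one. (A generic sketch with invented pitch data, not necessarily the author's computation.)

```python
import random
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy (bits) of any sequence of symbols."""
    n = len(xs)
    return -sum((c / n) * log2(c / n) for c in Counter(xs).values())

def bootstrap_sd(xs, trials=2000, seed=1):
    """Standard deviation of the entropy estimate under resampling."""
    rng = random.Random(seed)
    vals = [entropy([rng.choice(xs) for _ in xs]) for _ in range(trials)]
    mean = sum(vals) / trials
    return (sum((v - mean) ** 2 for v in vals) / trials) ** 0.5

short_line = [0, 2, 5, 7, 9, 11, 1, 4]   # eight pitches, like the A' sample
long_line = short_line * 5               # same distribution, forty pitches

# The eight-note sample yields a much less stable entropy estimate:
print(bootstrap_sd(short_line), bootstrap_sd(long_line))
```

The same distribution observed over forty notes rather than eight narrows the spread considerably, which is why the A' section's deviation of .54 dwarfs the others.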

78 "Respectable" from the perspective of sample size, that is.

Figure 6.3: Pitch class entropy in the vocal line of op. 16, no. 1

Analysis of CSEG entropy (omitting the least common CSEGs) leads to the results given
in Table 6.3.

            3-CSEG entropy    Deviation    4-CSEG entropy    Deviation
mm. 1-8     2.13              .17          2.61              .26
mm. 8-11    3.15              .36          3.57              .45
mm. 11-13   2.21              .24          3.23              .41

Table 6.3: CSEG entropies in op. 16, no. 1

Figure 6.4: CSEG entropies in op. 16, no. 1

The A section is built from a series of arches; the A' section is a single arch. Compared
with these, the B section is much more complicated in terms of contour, despite the
prominent eighth note pattern in mm. 8-9.
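A contour segment (CSEG) records only the relative height of each note, replacing pitches with ranks; the entropies in Table 6.3 are computed over such segments. A sketch of how overlapping 3-CSEGs might be extracted before counting them (the pitches form an invented arch, not a quotation from the score):

```python
def cseg(pitches):
    """Contour segment: each pitch replaced by its rank, 0 = lowest.
    Equal pitches receive equal ranks."""
    levels = sorted(set(pitches))
    return tuple(levels.index(p) for p in pitches)

def csegs(pitches, n):
    """All overlapping n-note contour segments of a melodic line."""
    return [cseg(pitches[i:i + n]) for i in range(len(pitches) - n + 1)]

arch = [60, 64, 67, 64, 60]  # a small arch shape (hypothetical pitches)
print(csegs(arch, 3))        # an ascent, a peak, and a descent
```

A section built from repeated arches generates few distinct CSEGs and hence low entropy; the more varied contours of the B section generate more distinct segments and higher entropy, as Table 6.3 shows.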
Analyzing "Christus factus est" in terms of interval class and registrally-ordered
interval class entropy leads to interesting results indeed.79 In terms of melodic interval
class content, each section is significantly distinct from the other sections, and what was
intuitively the most chromatic section in terms of pitch becomes the least chromatic in
terms of interval class. It is almost as if the pitch and interval contents of the A and A'
sections are inverted across the axis of the B section; we pass through a period of contour
complication, and when we emerge on the other side, pitch and interval entropies have
been reversed.
79 This analysis also points out the convenience of entropy measures. The pitch-based argument for ternary form took several pages, but since the interval-class argument is based on entropy, it takes no time at all.


            Interval class entropy    Deviation
mm. 1-8     2.13                      .14
mm. 8-11    2.44                      .18
mm. 11-13   1.64                      .16

Table 6.4: Interval class entropy in op. 16, no. 1
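Melodic interval class entropy treats a line as a sequence of interval classes (0-6) between successive notes rather than as a sequence of pitch classes. A sketch of the derivation (the melody is illustrative, not drawn from the score):

```python
from collections import Counter
from math import log2

def interval_classes(pitches):
    """Interval classes 0-6 between successive pitches (as MIDI numbers)."""
    diffs = (b - a for a, b in zip(pitches, pitches[1:]))
    return [min(d % 12, -d % 12) for d in diffs]

def entropy(symbols):
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

melody = [62, 70, 61, 69, 74, 73, 62, 61]  # hypothetical line with wide leaps
ics = interval_classes(melody)
print(ics)                     # [4, 3, 4, 5, 1, 1, 1]
print(round(entropy(ics), 2))
```

This is how a line can exhaust the chromatic in pitch while collapsing to mostly ic1s: octave-displaced semitone motion registers as high pitch class entropy but low interval class entropy, exactly the reversal Table 6.4 shows for mm. 11-13.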

Figure 6.5: Interval class entropy in op. 16, no. 1

Pc-set entropy data supports these conclusions.

            Trichords    Deviation    Tetrachords    Deviation    Hexachords    Deviation
mm. 1-8     2.55         .11          3.13           .11          2.92          .08
mm. 8-11    2.19         .10          2.44           .08          2.25          .05
mm. 11-13   2.5          .14          2.25           .09          2             .05

Table 6.5: Discrete pc-set entropies in op. 16, no. 1
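Pc-set entropies of this kind require assigning each discrete trichord, tetrachord, or hexachord to a set class. One compact way to compute a prime form under transposition and inversion is to normalize every rotation of the set and of its inversion and keep the most tightly packed result (a sketch following the Rahn-style packing convention, which differs from Forte's labels for a handful of set classes; the sample sets are illustrations):

```python
def prime_form(pcs):
    """Prime form of a pitch class set under transposition and inversion
    (smallest total span, ties broken by leftward packing)."""
    pcs = sorted(set(p % 12 for p in pcs))
    candidates = []
    for form in (pcs, sorted(-p % 12 for p in pcs)):
        for i in range(len(form)):
            rot = form[i:] + [p + 12 for p in form[:i]]
            candidates.append(tuple((p - rot[0]) % 12 for p in rot))
    # Prefer the smallest span (last element), then lexicographic order.
    return min(candidates, key=lambda t: (t[-1], t))

# The four pitch classes that dominate the A section's soprano line,
# D, A, C#, Bb, form inversionally-related pairs:
print(prime_form([2, 9, 1, 10]))   # (0, 1, 4, 5)
print(prime_form([11, 0, 4, 5]))   # {B, C, E, F} -> (0, 1, 5, 6)
```

Segmenting a line into discrete n-note groups, mapping each group to its prime form, and applying Shannon's formula to the resulting distribution then yields figures comparable to Table 6.5.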


Figure 6.6: Discrete pc-set entropies in op. 16, no. 1

For tetrachords and hexachords, we see pc-set predictability increase over time, as
intervallic predictability increases.
In other words, "Christus factus est" articulates a progression from one type of
organization to another. In terms of pitch, the A section represents a corrupted version of
linear directed time, corrupted by the canonic repetitions, which obscure the tonal
content of each individual line. The B section lacks this tonal organization, but achieves
similar results by sheer momentum; without context it seems most likely the listener
would interpret this section as an outgrowth of the vaguely-tonal A section, a period of
chromaticism setting up a return to the diatonicism of an expected A'. Even if the listener
makes different predictions for the B section, it seems clear that the section is headed
somewhere, established by meter changes and by shorter rhythmic durations than heard


thus far. But rather than returning to anything diatonic for the A' section, Webern instead
presents a chromaticized version of the contour that opened the piece. This last section
occurs in non-directed linear time; the low interval-class entropy provides a sense of
continuity, but not in a way that prompts the listener to predict goals. Thus, Webern's
progression from pitch class organization to interval class organization can also be
understood as a progression from the directional to the adirectional.
This progression is a salient feature of the text, as well. The first phrase of the text
is temporally unambiguous: "Christ became obedient even unto death, death on a cross."
Instead, it is the second phrase (along with the text that accompanies the return of the first
phrase musically) that introduces temporal ambiguity, by calling into question the
succession of events discussed: "Because of this, God raised Him and bestowed upon
Him a name which is above all other names." From a temporally superficial perspective,
Christ possessed the name to which the text refers long before the events of the
crucifixion upon which this obedience depends and through which this obedience was
demonstrated (at least insofar as the term "before" retains meaning in this context); thus,
the effect (exaltation) precedes the cause (crucifixion). Traditionally, Catholic theology
offers the explanation that these events are in fact simultaneous (since "in the Eternal...
nothing passes away, but the whole is simultaneously present").80
Indeed, this simultaneity is echoed throughout the texts of the Latin canons.
Webern described the texts of the original three canons as representing a coherent
narrative, progressing from birth ("Dormi Jesu") to crucifixion ("Crux Fideles"),
followed by a reflection in the form of an invocation ("Asperges Me").81 This
progression demonstrates a similar temporal disconnection, since "Asperges Me" is both
a reflection on the crucifixion of Christ and a reflection on baptism, dealing with birth
80 The Confessions of St. Augustine (Dover, 2002), 223.
81 Anne Shreffler, "Mein Weg...," 323.

and death simultaneously. Alternately, one can argue that "Asperges Me" represents a
temporal disconnect because its text is in the future tense ("Thou shalt wash me, and I
shall become whiter than snow") when the event that washes the souls of the faithful
has already happened, both in the literal sense (since the crucifixion occurred two
thousand years ago) and in the modular sense (since this particular text is a prominent
part of the mass for Holy Saturday, which is after the crucifixion within the chronology of
the Easter Triduum).
This atemporality can be easily connected with Kramer's discussion of musical
time as similar to anthropology's construct of sacred time: a moment-driven, non-linear
temporality characterized as "repeatable, reversible, accelerating and decelerating,
possibly stopping."82 Under this interpretation, the texts of these canons necessitate the
creation of sacred time for its traditional purpose: contemplation of the temporal nature
of the eternal (or the spatial nature of the omnipresent), and the establishment of a
connection between the events that underlie the formation of Christianity and the present
day by dissolving the temporal distinctions between them.
Just as the textual events of the second half of "Christus factus est" cause us to
question our perceptions of the nature of time, the musical events of the second half cause
us to question our assumptions regarding the temporal nature of the piece. Musical
material that seems directed (if not upon first hearing) in the first phrase is made
adirectional upon its return at the end of the piece. Similarly, just as the temporal
discontinuity contained within "Christus factus est" leads us to perceive the texts of the
other canons as temporally ambiguous (or, rather, to interpret them within a larger,
temporally ambiguous framework), the musical atemporality of this canon highlights the
atemporality of the canons that follow it.
Furthermore, one might argue that placing these texts within this larger temporal
82 Kramer, The Time of Music, 17.

framework creates something analogous to the simultaneity of musical events when
considered within the context of musical form. The act of labeling mm. 1-8 as A and mm.
11-13 as A' confers upon them a degree of simultaneity, insofar as it establishes them as
parallel, but what does it mean for something directional to be parallel to something
adirectional, or for two very different conceptualizations of temporality to be
simultaneous?
One can find clarifying commonalities between the reconciliation of the
directional with the circuitous, and Bhabha's discussion of the necessity of a "temporality
of negotiation" from the perspective of critical theory, a temporality in which "the
event of theory becomes the negotiation of contradictory and antagonistic instances that
open up hybrid sites and objectives of struggle, and destroy those negative polarities
between knowledge and its objects, and between theory and practical-political reason."83
Elsewhere in the same text, Bhabha refers to critical theory as a parallel to political
activism, implying that critical theory is faced with the same seemingly-paradoxical goal
of reconciling conflicting political objectives and also reconciling this conflict with the
existing body of theory. This conceptualization of temporality seems analogous to those
expressed in "Christus factus est": a site of temporal hybridization.
Perhaps mathematical set theory also has insight to offer. We define an infinite set
as a set that can be put into one-to-one correspondence with a proper subset of itself; in
other words, a set that is the same size as a part of itself, even though that part omits some
of its elements. This seems paradoxical: intuitively, we believe that there ought to be fewer
even numbers than there are whole numbers, since only half of the whole numbers are
even, but these two sets are both the same size, since they are both countably infinite.
However, the cause for alarm lies not in our definition of an infinite set, but in our
intuitive (lack of) understanding of infinity as a concept. What appears to be a contradiction is, in fact, a
83 Homi Bhabha, The Location of Culture (Routledge, 2004), 25.

defining characteristic of an infinite set. It seems that difficulty in accepting a temporal
phrase as parallel to an atemporal phrase does not imply a self-contradictory labeling, or
a contradiction at all (as Kramer suggests84), but rather an intrinsic quality of non-directed
temporality itself.
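The whole-number/even-number correspondence can be made concrete: the pairing n to 2n matches every whole number with a distinct even number and misses no even number, which is exactly what it means for the two sets to be the same size (a small illustration outside the musical argument proper):

```python
# Pair each whole number n with the even number 2n.
def pair(n):
    return 2 * n

naturals = list(range(10))
evens = [pair(n) for n in naturals]

# Distinct inputs get distinct partners, and every partner is even:
print(list(zip(naturals, evens)))
assert len(set(evens)) == len(evens)
assert all(e % 2 == 0 for e in evens)
```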
On a large scale, "Christus factus est" can be thought of as accomplishing the
same thing Ligeti argues for Webern's music on a smaller scale:
Webern's music brought about the projection of time flow into an
imaginary space by means of the interchangeability of temporal direction
provoked by the constant reciprocity of the motivic shapes and their
retrogrades... This projection was further strengthened by the 'grouping round
a central axis,' which implies a conception of the time-continuum as 'space,'
and by the fusion of the successive and the simultaneous in a unifying
structure... Webern's structures seem, if not to move forward in one direction,
at least to circle continuously in their illusory space.85

Here, Ligeti is referring to retrograde motion and musical palindromes, as well as the
adirectionality of a musical language that admits such structures. It is easy enough to
recast this argument in terms of formal structures, however: "Christus factus est" can be
thought of as an organizational palindrome, with an A section diatonic in pitch and
chromatic in intervallic content and an A' section diatonic in intervallic content and
chromatic in pitch content. Thus, although Christopher Hasty disagrees with Ligeti,
claiming that structural connections serve to create a perception of succession between
events, both superficially ("a lyrical continuity which can be felt in much of [Webern's]
music") and fundamentally, these same aspects of form that unite and give direction to
musical events can themselves function as agents of adirectionality.86
84 Kramer, The Time of Music, 5.
85 György Ligeti, "Metamorphoses of Musical Form," Die Reihe 7 (Bryn Mawr, PA: Presser, 1965), 16.
86 Christopher Hasty, "On the Problem of Continuity and Succession in Twentieth-Century Music," Music Theory Spectrum 8 (Spring 1986): 59.

Op. 5, no. 4

Predictability of musical material and changes in perceived musical directedness
are at the heart of the listener's perception of form in the fourth of Webern's Fünf Sätze
für Streichquartett, op. 5. As the listener progresses through the movement, he or she
experiences frequent shifts among non-directed linear time, multiply-directed linear time,
and directed time, with these changes eventually creating the same
parallelism between various forms of time heard in op. 16. Since predictability and
directedness are exactly what entropy hopes to measure, it stands to reason that entropy
should have useful things to say about the passage of time in this movement, even
though the movement's brief eleven-measure length places it in a more precarious
position statistically than the song cycles that typify discussions of entropy in tonal
music.
The fourth movement is ternary. The A section (mm. 1-6) is characterized by
descending, mostly quarter-note figures that travel from instrument to instrument; the end
of this section is marked by an ascending sixteenth-note figure in m. 6. The B section
(mm. 6-10) is characterized by drones in the cello and second violin and by an
accompaniment part in triplets in the viola, throwing the first violin's arco melodic line
(so zart als möglich) into relief. In m. 10 a triplet variant on the ascending figure found in
m. 6 returns to mark the end of the B section. In the remaining measures, the material
from mm. 1-6 returns in condensed form; the second violin line and viola lines in mm.
11-12 are identical in pitch content (except for octave displacement) to the cello line of
m. 5, and the second violin line in mm. 11-12 is similarly taken from the first violin line
of m. 5, while the first violin line recalls the viola line of this same measure.
Much scholarly work on this movement concerns itself with the ascending figure
heard in mm. 6, 10, and 12-13. Lewin terms this the FLYAWAY motive, and notes a
relationship between its three forms: m. 6 is mm. 12-13 transposed up by the first interval


of the series, and m. 10, when transposed up by its last interval, produces mm. 12-13.
Lewin diagrams this relationship as in Figure 6.7. In this diagram, Lewin depicts the
closing figure (mm. 12-13, Ab-C-D...) as central, as balanced between its two other
forms, the way a tonic is balanced between a dominant and subdominant.87

Figure 6.7: Lewin's depiction of the three flyaway motives

For Lewin, the FLYAWAY motive not only demarcates formal boundaries, but helps us to
interpret the roles of these sections, allowing us to hear the A' as synthetic of and central
to the A and B sections.
Others have noted relationships between the flyaway motive and the pitch content
of the A and B sections. Perle writes that the first four notes of m. 6's motive match the
first violin line in mm. 1-2, uniting material from (0156) and (0167).88 Similarly, Perle
hears the final pitch of m. 13's motive, F#, as recalling the first violin's F#s in m. 2 and m.
12. Arguably, this hearing brings more closure than the previous appearances of the
87 David Lewin, Generalized Musical Intervals and Transformations (New Haven and London: Yale University Press, 1987), 188-189. Diagram appears on 188.
88 George Perle, Serial Composition and Atonality (Berkeley and Los Angeles: University of California Press, 1981), 16-18.


motive, since m. 13 is the only time the motive ends on the same pitch class as the phrase
immediately before it. The motive in m. 6 ends on Bb, not reaching the C that began and
ended the first violin's line in the same measure (in each case heard an octave lower); the
motive in m. 10 ends on Eb, overreaching both the first violin's C and the viola's D in
m. 9. This interpretation seems parallel to Lewin's, as the final appearance of the motive
becomes central to the earlier appearances: the first underreaches, the second
overreaches, and the third makes its target.
Alternately, one can hear the final pitches of the m. 10 and m. 13 motives as
recalling the registral extremes of mm. 1-2. These pitches form the upper extremes of
later sections; F# is the highest pitch present in both the A and A' sections, while Eb is the
highest pitch present in the B section. The motive in m. 10 ends on the highest pitch of
the section it completes, achieving provisional closure, but the motive in m. 13 extends
this ascent to include the F# as well.
Pc-set analysis prompts similar synthetic conclusions. Early in the A section,
directedness is established principally through movement between (0156) and (0167).
Perle notes that the viola's E and F# are the only pitches not held in common between {B
C E F} and {B C F F#}; bearing this in mind, the viola's movement between E and F#
becomes an encapsulation of the movement between (0156) and (0167).
The viola line remains preoccupied with these pitches until m. 5, when the first
violin and cello shift from (0167) to (0125), that is, a transformation of (0156), with the
5 held constant and the other pitches transposed up by ic3 (literally, B, C, and E are
transposed up by ic1 while F is transposed down by ic3). This change marks the
beginning of a second, contrasting section within A. Clampitt, taking a more vertical
approach, hears this passage as saturated with (01267), Forte set class 5-7, as he hears
most of the A section, in which case m. 5 marks not a change in pitch material but a
change in how this pitch material is realized.89

Figure 6.8: Pc-set analysis of op. 5, no. 4

Measures 1-4 in the violins are heard in
terms of linear (01267)s, but by the cello's imitative entrance at the end of m. 4, these
(01267)s have become cross-sections, indicating a more rapid harmonic motion. The
cello's descending figure is itself heard as components of verticalities, rather than as a
complete pc-set in its own right.

Figure 6.9: Clampitt's analysis of op. 5, no. 4, mm. 1-6


89. David Clampitt, "Ramsey Theory, Unary Transformations, and Webern's Op. 5, No. 4,"
Integral 13 (1999): 84. Diagram appears on 89.

Beginning in m. 4, Clampitt no longer hears in terms of discrete, clearly-defined
segments, but rather in terms of overlapping segments. As a result, the appearances of
new pitches prompt retrospective reinterpretation of pc-set identity in a way they
previously did not; a pitch that is initially heard in terms of the preceding pc-set (such as
the cello B natural in m. 4) is then heard as a member of the succeeding pc-set. Melodies
that in mm. 3-4 seemed directed and clear become open to reinterpretation. In either case,
a distinction between mm. 3-4 and m. 5 is apparent. Changes in the melodic character of
m. 5 sharpen this distinction. The imitated melody of mm. 3-4 is unstable in terms of
range, descending from F#6 to C2 in less than three measures, whereas the imitated
melody of mm. 5-6 sits comfortably within a perfect fourth. In mm. 3-4, steepness and
imitation create propulsion; the listener may not be able to guess C as the goal of this
passage, but the passage's direction is clear. No such sense of large-scale direction is
evident in the melody of m. 5.
When the flyaway motive makes its first appearance in m. 6, it returns to the pitch
materials and the tessitura of mm. 1-4. Analyzed in terms of pitch class sets, this motive
is an (0167) joined with an (016), one ic1 away from the combined pitch content of mm.
1-2. Boretz analyzes the pitch material of mm. 1-2 as a succession of ic7s joined by ic6s,
instead hearing m. 1 as (4 11)(5 0) and m. 2 as 11)(5 0)(6.90 In these terms, m. 6
extends the chain to 10)(4 11)([5] 0)(6 1)(7, with only the F missing. In either case, the
impression is that of a conclusion, although not a synthetic one; after the contrasting
material of m. 5, the listener has returned to material from the beginning of the section.
At the same time, m. 6 departs from the few commonalities heard between mm. 3-4
and m. 5. The melodies in mm. 3-4 and m. 5 were both presented imitatively (the latter
less frequently), and both melodies descended (the latter less drastically), whereas m. 6
is a monophonic ascent. The pitch materials of m. 6 resolve the contrasts between mm. 3-4
and m. 5, but the motive's texture and direction introduce contrasts of their own, making
m. 6 a point of partial closure.

90. Benjamin Boretz, "Meta-Variations Part IV: Analytical Fallout (I)," Perspectives of New
Music 11: no. 1 (Autumn-Winter 1972): 217-223.
The B section focuses less on the contrasts that characterized the A section. In
terms of pitch material, the B section is stable, and can in fact be analyzed wholly in
terms of two pc-sets: (0347) in the first violin, and (01468) in the lower three voices. The
first violin can be analyzed as a pair of interlocking major thirds, (04) and (37), in
keeping with the augmented triad articulated in the viola, or as a pair of concatenated
minor thirds, (03) and (47), in keeping with the prominent minor thirds it presents
melodically. Certainly the B section is stable in terms of contour, as well. In short, the B
section provides contrast to the A section because it lacks contrasts within itself.
The A' section resolves all three of the contrasts presented in this movement:
(0156) vs. (0167), mm. 1-4 vs. m. 5, and A vs. B. The pc-set of mm. 11-12, (01267), is
(0156) merged with (0167) and is the pitch-class content of mm. 1-4 presented in the
contour of m. 5. To some degree the imitative character of the A section is retained, most
notably between the second violin/viola and the cello, but the metric alignment between
voices and the shared pitch material among the lower three voices presents homophony
and stasis, as heard in the B section. The stable ranges of m. 5 and mm. 7-9 are preserved,
converting what sounded in mm. 1-2 as a tetrachord plus a singleton into a pentachord.
The A' section also adopts the timbres of the B section: an arco first violin over am Steg
lower voices, presented with harmonics not heard in the A section. Burkhart notes that the
octave doubling between the second violin and viola echoes the octaves between the
second violin's sustained B and the first violin's centric B in mm. 7-10.91
91. Charles Burkhart, "The Symmetrical Source of Webern's Op. 5, No. 4," The Music
Forum 5 (1980): 325.

This same resolution of contrasts is evident through entropy-based analysis. The
following tables show pitch and interval class entropies for each formal section. (These
data are computed without the flyaway motive.) These figures show support for a ternary
hearing; that is, the A and A' sections are both significantly different from the B section,
which has a much lower entropy than either. Officially we cannot conclude anything from
the fact that entropy cannot tell the A and A' sections apart (a result of wide confidence
intervals, owed in part to small sample sizes), but unofficially the entropy scores for A
and A' are very close, leading one to believe that these sections are stylistically very
similar.

Section           Pitch entropy    Deviation
A, mm. 1-6        4.43             .44
B, mm. 7-10       2.75             .28
A', mm. 11-13     3.95             .58

Table 6.6: Pitch entropy in sections of op. 5, no. 4

Figure 6.10: Pitch entropy in op. 5, no. 4


Section           Interval class entropy    Deviation    Registrally-ordered interval class entropy    Deviation
A, mm. 1-6        2.58                      .18          2.89                                          .23
B, mm. 7-10       1.35                      .24          1.95                                          .22
A', mm. 11-13     2.35                      .31          2.60                                          .36

Table 6.7: Interval class entropies in op. 5, no. 4

Figure 6.11: Interval class entropy in op. 5, no. 4

Figure 6.12: Registrally-ordered interval class entropy in op. 5, no. 4
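The entropy figures in Tables 6.6 and 6.7 are applications of Shannon's formula, H = -Σ pᵢ log₂ pᵢ, to the distribution of symbols within each section. The following is a minimal sketch of that calculation in Python; the pitch-class strings are invented for illustration, not quoted from Webern, and the deviation columns of the tables are not reproduced.

```python
from collections import Counter
from math import log2

def shannon_entropy(events):
    """H = -sum(p * log2 p) over the observed distribution of symbols."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Invented pitch-class strings standing in for two formal sections: one that
# circulates through most of the aggregate (A-like), one saturated with a
# handful of pitch classes (B-like).
a_like = [11, 0, 4, 5, 11, 0, 5, 6, 3, 8, 10, 2, 7, 1, 9]
b_like = [11, 7, 11, 10, 11, 7, 11, 0, 11, 7, 11, 10, 11, 7]

print(shannon_entropy(a_like))  # the more varied line yields the higher score
print(shannon_entropy(b_like))
```

Four equally likely symbols give exactly 2 bits; a section that reuses a few pitches heavily, as the B section does, scores much lower than one that spreads its events evenly over many pitches.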

Pc-set entropy results prompt the same conclusions, as shown in Table 6.8 and
Figure 6.13. The B section has extraordinarily low entropy, significantly lower than any
other section at any other cardinality, with the exception of the A' section divided into
hexachords; but only the second violin line is long enough to support hexachordal
divisions. As a result, the discrete hexachords are very predictable, since they are very
few. In smaller cardinalities, the A' section is consistently more predictable than the A
section but less predictable than the B section, which supports the synthetic conclusions
drawn earlier.

Section           Cardinality 3    Deviation    Cardinality 4    Deviation    Cardinality 6    Deviation
A, mm. 1-6        2.31             .10          2.5              .08          2                .03
B, mm. 7-10       .88              .07          1.58             .01          1                .02
A', mm. 11-13     1.79             .13          2                .07          1                .05

Table 6.8: Discrete pc-set entropies in op. 5, no. 4

Figure 6.13: Discrete pc-set entropies in op. 5, no. 4
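The discrete-segmentation procedure behind Table 6.8 can be sketched as follows. The prime-form function below uses one common convention (the lexicographically smallest zero-transposed rotation of the set or its inversion), which yields the (0156)/(0167) labels used in this chapter, though published prime-form conventions differ for a handful of set classes; the pc string itself is invented.

```python
from collections import Counter
from math import log2

def prime_form(pcs):
    """Prime form as the lexicographically smallest zero-transposed rotation
    of the pc-set or of its inversion (one common convention)."""
    best = None
    for base in (sorted(set(pcs)), sorted((-p) % 12 for p in set(pcs))):
        for i in range(len(base)):
            rot = base[i:] + [p + 12 for p in base[:i]]
            cand = tuple((p - rot[0]) % 12 for p in rot)
            if best is None or cand < best:
                best = cand
    return best

def discrete_segments(line, n):
    """Chop a line into successive non-overlapping n-note segments."""
    return [line[i:i + n] for i in range(0, len(line) - n + 1, n)]

def entropy(symbols):
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Invented pc string; its discrete tetrachords fall into (0156) and (0167).
line = [11, 0, 4, 5, 11, 0, 5, 6, 0, 1, 5, 6]
classes = [prime_form(seg) for seg in discrete_segments(line, 4)]
print(classes)          # [(0, 1, 5, 6), (0, 1, 6, 7), (0, 1, 5, 6)]
print(entropy(classes))
```

A short line partitioned this way produces very few segments, which is why the hexachordal figures in Table 6.8 rest on so small a sample.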

Rather than speaking to similar pitch or interval materials, though, the differing
entropy scores speak to the consistency or predictability with which these materials are
used. If Webern had used half steps in the A section with the same consistency as the
major thirds in the B section, both A and B would have the same interval class entropy;
instead, the fact that B has a much lower interval class entropy than A implies that the B
section is more saturated with its primary intervals than the A section is with its. In other
words, the major and minor thirds of the B section are aurally more predictable than the
minor seconds and perfect and augmented fourths of the A section.
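The point about intervallic saturation can be made concrete. In the same sketch style (invented lines, not quotations from the quartet), a melody that uses one interval class exclusively has an interval class entropy of zero, however many notes it contains, while a line that mixes several interval classes does not.

```python
from collections import Counter
from math import log2

def interval_classes(pitches):
    """Fold successive melodic intervals into interval classes 0-6."""
    ics = []
    for a, b in zip(pitches, pitches[1:]):
        i = abs(b - a) % 12
        ics.append(min(i, 12 - i))
    return ics

def entropy(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A line saturated with major thirds versus one mixing ic1, ic5, and ic6.
thirds_line = [60, 64, 68, 64, 60, 64, 68, 64, 60]
mixed_line = [60, 61, 66, 71, 72, 66, 67, 62, 61, 66, 72]

print(entropy(interval_classes(thirds_line)))  # one ic: fully predictable
print(entropy(interval_classes(mixed_line)))   # several ics: positive entropy
```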
This decreased predictability reflects the contrasts heard throughout the A section,
and the diversity of pitch and intervallic material that create these contrasts. If the higher
entropy of the A section only resulted from these contrasts, though (that is, from the
juxtaposition of mm. 1-4 and m. 5), we would expect either section on its own to have an
entropy score more like that of the B section. Tables 6.9ff show that the opposite is the
case.

Section      Pitch entropy    Deviation
mm. 1-4      3.78             .44
mm. 4-5      3.35             .45
mm. 7-10     2.75             .28

Table 6.9: Pitch entropy in op. 5, no. 4, A and B

Figure 6.14: Pitch entropy in op. 5, no. 4, A and B

Section      Interval class entropy    Deviation    Registrally-ordered interval class entropy    Deviation
mm. 1-4      2.46                      .25          2.84                                          .31
mm. 4-5      2.26                      .28          2.54                                          .32
mm. 7-10     1.35                      .24          1.95                                          .22

Table 6.10: Interval entropies in op. 5, no. 4, A and B

Figure 6.15: Interval class entropy in op. 5, no. 4, A and B

Figure 6.16: Registrally-ordered interval class entropy in op. 5, no. 4, A and B

Measures 1-4 are markedly less predictable than mm. 7-10 in all types of entropy.
Measures 4-5 are not significantly different from mm. 7-10 in terms of pitch entropy
(which is reasonable, since both passages are characterized by small, stable pitch ranges)
but show significant differences in both types of interval class entropy shown above. As a
result, we cannot conclude that the higher entropy of the A section (when considered as a
whole) is owed entirely to contrasts between mm. 1-4 and mm. 4-5. Instead, the
differences in entropy between the A and B sections speak to a more pervasive
unpredictability in the A section.
In particular, this unpredictability gives rise to multiply-directed linear time. The
A section is characterized by goals that are implied but never realized, gestures towards
paths that are not taken. This character first becomes apparent in m. 4, the first
transformational activity of the movement. All (0167)s heard thus far have been
expressed as the pitch collection {B C F F#}; the movement's sense of propulsion thus far
has been established by alternation between the pitch collections {B C E F} and {B C F
F#}, which has spanned the first three measures. The second violin's imitation of the
descending figure marks a change from the predictability and propulsion of an alternation
between two static pitch collections to the less-predicted propulsion of I4 or T5,
transformations away from these pitch collections, a change reflected in the section's
higher pitch entropy score. Then, after the listener is given evidence that canonical
transforms will be relevant, the first violin shifts to a contextual transform of this material
at the pickup to m. 5.
The imitative figure of mm. 3-4 presents its own sort of directedness. A tonally-biased
listener may hear these figures as closing gestures, especially since each begins
with a descending perfect fifth and ends with a descending perfect fourth; but even in
the absence of these tonal connotations, a gesture that descends as quickly as this one
seems directed simply because it must end when it reaches the bottom of an instrument's
or an ensemble's range. The existence of Shepard tones seems to prove this point; they
are disorienting because they are descending lines that do not land or run out of the
potential for further descent, implying that landing is exactly the behavior we expect from
a descending line.
That these gestures have a goal seems clearer when the cello begins to repeat
the exact pitch classes heard in the first violin. Not only does the listener know that the
larger descent must end, but s/he can now predict where it will end: C. The cello does
reach this goal, in m. 5, but not until after the first violin's new melody has already
begun. By the time the goal is reached, it is no longer the goal; the counterpoint between
the first violin and the cello has become more perceptually salient than the completion of
the cello's descent. Even if the cello line is heard in isolation, the elision of the
descending (0167) gesture with the cello's m. 5 melody makes it less likely the C will be
heard as a point of closure.


The B section is comparably straightforward. In mm. 7-10, the first violin's
repetition of B natural (which appears each time in this line with a longer rhythmic
duration than the surrounding notes) creates a perception of motion away from a center
(on bts. 2 and 3 of m. 8) followed by a return to this center (m. 9), a perception supported
by the pedal B natural in the second violin.92 It is quite likely that the listener predicts a
return to B following the C neighbor tone at the end of m. 9, a B that can be heard in
the second violin line in m. 10, whose sustained B extends beyond the end of the first
violin's melodic line. The result is something more like directed linear time: motion
towards and away from a clear pitch center, contrasting with the multiple averted goals of
the A section.93 Of course, this directed melody occurs over the metronome of the viola,
creating a clock-like temporal regularity.
On a large scale, these two approaches to time are united by the flyaway motive.
The motive establishes almost the reverse of multiply-directed linear time: it serves as a
reasonable goal for each section (if a goal that is only evident in retrospect), but each
section makes this goal reasonable in a different way. That is, the presence of the flyaway
motive leads the listener to think of the movement's formal sections as parallel in some
way, multiple paths taken to reach the same goal.
Unfortunately, entropy cannot tell us much about this motive. A seven-note
motive that repeats no pitches could belong to a fully diatonic piece (recall that, as far as
entropy is concerned, all heptatonic scales are the same, maximally even or otherwise), or
it could belong to a serial work; without repetitions or a larger sample size, we cannot say
which. The motive's intervallic presentation leaves us with the same problem; <+4 +2 +5
+2 +6 +3> only repeats one interval class, so we do not have enough information to draw
any useful statistical conclusions.

92. The E pedal in the bass can be heard as creating a sense of E major, albeit one
complicated by the Gb augmented triad in the viola. Whether or not the listener chooses to
hear this passage in a key, though, it seems clear that B is the center of the first violin line,
and that a descending-fifth leap down to E at the end of the passage would seem jarring, not
conclusive or confirming.

93. Of course, any sense of propulsion created by the first violin in mm. 7-10 is limited (or
at least complicated) by the scope of the line. One can argue that propulsion is heard for
approximately half a measure, when the first violin moves away from B natural, and that
outside of this window the passage reverts to stasis, as confirmed by the lower three lines.
Nevertheless, the musical features that make this motive a logical goal (if only
retrospectively) each time it occurs can be described in terms of entropy and
predictability. The motive's appearance in m. 6 creates a sense of provisional closure
through the combination of predictable pitch material and unpredictable texture, register,
and contour: the former creating continuity between mm. 1-5 and m. 6, and the latter
ensuring that m. 6 is not simply heard as a continuation of mm. 1-5. If the listener were
inclined to predict this motive's arrival as the beginning of a new subsection of A, rather
than its conclusion, this projected subsection would still bear significant formal change; it
could be heard as parallel to the entire A section thus far, perhaps, but not simply as a
continuation of it, as a result of the unpredictability the motive introduces. When the
motive returns in m. 10, it provides contrast with the more centric contours of the B
section, and presents in a single measure of one instrumental line almost as many distinct
pitches as were heard in the entire B section. These discontinuities lead the listener away
from the single-directedness of the B section and back into the multiply-directed time that
characterizes the piece as a whole: not resolving the first violin's C, but removing the
sense of linearity and directedness that necessitated the C's resolution in the first place.
Of course, the motive's reappearance in m. 10 also accesses the closure the motive
created in m. 6; since it has been designated as a signal of closure previously, we are
inclined to hear it as bearing closure again.
When the A' section synthesizes the factors that led to multiple-directedness in the
A section with those that led to single-directedness in the B section, the result is stasis,
almost moment time. The contours and stable pitch ranges of m. 5 and mm. 7-10 are now
presented without direction; we do not hear movement toward or away from a single
centric pitch, as in the B section, or the transformational activity that depicted m. 5 as
motion away from the pc-sets of mm. 1-4. Instead, we hear a pc-set closely related to the
pc-set of the flyaway motive, the (0167)+(0156) that the flyaway motive was heard as a
logical extension of when it first appeared in m. 6. That is, mm. 11-12 present no
propulsion in terms of pitch content because we have already reached the goal. We hear a
pc-set that now implies stasis presented within static contours and ranges.
The pizzicati in m. 12 close off this passage, echoing the pizzicati that closed mm.
1-2 and drawing additional strength from their low register (dropping twenty-two
semitones in the second violin, twenty-three in the viola, and eighteen in the cello).
Ending the passage of stasis in mm. 11-12 makes real closure possible, and we hear the
flyaway motive that ends the piece immediately afterwards.
The stasis of these measures is born of the synthesis of the A and B sections.
Perhaps in some sense moment time is itself synthetic of multiply-directed and
singly-directed time: the stability of the latter combined with the problematized directionality
of the former, the predictability of the latter combined with the unpredictability of the
former. Under this interpretation, the A' section resolves not just the pitch, texture, and
contour contrasts between the A and B sections, but also the contrasting directionalities of
these sections.

CHAPTER VII
CONCLUSIONS

Perhaps the greatest utility of information theory entropy as an analytic tool lies in
its versatility. A statement from one of the earliest articles on information theory is often
quoted to this effect.
This is a theory so general that one does not need to say what kinds of
symbols are being considered... the theory is deep enough so that the
relationships it reveals indiscriminately apply to all these and to other
forms of communication.94

Nearly all musical phenomena can, in some sense, be considered information. The analyst
has a great deal of latitude in determining how best to understand a particular musical
work in terms of information; an alphabet can be constructed to allow for meaningful
description and analysis of almost any musical work, and the calculations that result can
become evidence for a wide variety of conclusions. Consequently, information theory
entropy is not the mindless plugging of numbers into formulas, any more than statistical
hypothesis testing is. Rather, it is reflective of the analyst's choices and assumptions.
Of particular interest are information-theoretic alphabets dealing with meter. An
alphabet based on beat class sets could offer insight into the predictability of a work's
meter, providing an easy method of comparison between works. This analytical approach
would be especially useful for rap, in particular American rap of the late 1990s and
2000s, in which accent patterns created by rhyme scheme can be an important designator
of form. Results obtained using this alphabet may provide insight into larger-scale trends
in the genre with regard to metric dissonance articulated by rhyme scheme. Beat class
entropy, if used in conjunction with CSEG entropy, could also provide a more
sophisticated representation of motivic predictability.

94. Warren Weaver, "Mathematics of Communication," Scientific American 181:1 (1949),
14.
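One possible realization of such an alphabet, offered only as a sketch: treat each measure's accented positions as a beat-class set and measure the entropy of the succession of those sets. The accent patterns below are invented, not drawn from any repertoire.

```python
from collections import Counter
from math import log2

def entropy(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# One frozenset of accented beat classes per measure (0 = downbeat, in a
# hypothetical four-beat measure). A regular rhyme scheme reuses a single
# pattern; a metrically dissonant delivery keeps shifting.
regular = [frozenset({0, 2})] * 8
shifting = [frozenset({0, 2}), frozenset({1, 3}), frozenset({0, 3}),
            frozenset({1, 2}), frozenset({0, 2}), frozenset({3}),
            frozenset({1, 3}), frozenset({0, 1, 2})]

print(entropy(regular))   # a single repeated pattern is maximally predictable
print(entropy(shifting))  # six distinct patterns over eight bars: 2.5 bits
```

Because the alphabet is sets of beat classes rather than individual attacks, the measure of predictability tracks the accent pattern as a whole, which is the level at which rhyme scheme articulates form.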


Also promising is the extension of pc-set entropy into transformational entropy.
This approach would not only provide insight into particular composers' use of
transformations, but also a plausible means of comparing transformational content
between wildly different musical styles, comparing, for example, canonic and contextual
transforms in Webern with Neo-Riemannian transforms in Wagner, based on the
predictability and frequency of their use.
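A sketch of what transformational entropy might look like, assuming the simplest possible transform alphabet (transposition only; inversions, contextual transforms, and Neo-Riemannian moves would extend the labeling function). The chord succession below is invented for illustration.

```python
from collections import Counter
from math import log2

def transposition_label(a, b):
    """Label the move from pc-set a to pc-set b as Tn if b is a transposition
    of a; a fuller alphabet would also test I_n and contextual transforms."""
    for n in range(12):
        if {(p + n) % 12 for p in a} == set(b):
            return f"T{n}"
    return "other"

def entropy(symbols):
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# An invented succession of (0156)-type tetrachords; the alphabet is the
# transform connecting each chord to the next, not the chords themselves.
chords = [{11, 0, 4, 5}, {0, 1, 5, 6}, {1, 2, 6, 7}, {11, 0, 4, 5}, {0, 1, 5, 6}]
labels = [transposition_label(a, b) for a, b in zip(chords, chords[1:])]
print(labels)           # ['T1', 'T1', 'T10', 'T1']
print(entropy(labels))  # low: the succession leans heavily on T1
```

The entropy score here describes the predictability of the transformational routine itself, which is what would allow comparison across repertoires whose pitch materials share nothing.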
In addition to its versatility, information theory entropy is effective as an
analytical metaphor because it describes something fundamental to musical experience:
the transmission and reception of information, however one wishes to define that term. It
is information, in some sense, that Adorno describes when he writes, regarding the op. 5
Fünf Sätze, "Their intensity of concentration is what makes them a totality: a sigh, as
Schoenberg noted with admiration, was worth an entire novel, a tense gesture of three
notes on the violin was literally the equal of a symphony": a density of information that
alters the listener's perception of time and informs nearly every aspect of the listening
experience.95 Although information theory entropy is mostly known as a formula, and
within music theory is infamous as pseudoscience, its conceptual relationship with
information allows it to describe an experience of this altered time, something that
historically resists description. Fifty years of music-theoretic use of information theory
have explored entropy's ability to quantify objective musical perceptions; what remains is
entropy's ability to describe the non-objective, to act as a quantitative metaphor for the
experiential, with a depth that extends beyond Shannon's simple formula.

95. Theodor Adorno, "Anton von Webern," in Sound Figures, trans. Rodney Livingstone
(Stanford, CA: Stanford University Press, 1999), 96.

BIBLIOGRAPHY

Adlington, Robert. Moving Beyond Motion: Metaphors for Changing Sound. Journal
of the Royal Musical Association 128: no. 2 (2003): 297-318.
-------. Musical Temporality: Perspectives from Adorno and de Man. Repercussions 6:
no. 1 (Spring 1997): 5-59.
Adorno, Theodor. Essays on Music, ed. Richard Leppert, trans. Susan Gillespie. Berkeley,
Los Angeles: University of California Press, 2002.
-------. Quasi una Fantasia: Essays on Modern Music, trans. Rodney Livingstone. New
York: Verso, 1998.
-------. Sound Figures, trans. Rodney Livingstone. Stanford: Stanford University Press,
1999.
Appleton, Jon. Re-evaluating the Principle of Expectation in Electronic Music.
Perspectives of New Music 8: no. 1 (Autumn/Winter 1969): 106-111.
Augustine. The Confessions of St. Augustine. Mineola, New York: Dover, 2002.
Bailey, Kathryn. Rhythm and Meter in Weberns Late Works. Journal of the Royal
Musical Association 120: no. 2 (1995): 251-280.
-------. The Twelve-Note Music of Anton Webern. Cambridge University Press, 1991.
Barry, Barbara. Musical Time: The Sense of Order. Stuyvesant, NY: Pendragon Press,
1990.
Bhabha, Homi. The Place of Culture. London, New York: Routledge, 1994.
Boretz, Benjamin. Meta-Variations Part IV: Analytical Fallout (I). Perspectives of New
Music 11: no. 1 (Autumn-Winter 1972): 146-223.
Boykan, Martin. Silence and Slow Time: Studies in Musical Narrative. Oxford:
Scarecrow Press, 2004.
Broyles, Michael and John Titchner. Meyer, Meaning, and Music. Journal of Aesthetics
and Art Criticism 32: no. 1 (Autumn 1973): 17-25.
Burkhart, Charles. The Symmetrical Source of Webern's Op. 5, No. 4. The Music
Forum 5 (1980): 317-334.
Clampitt, David. "Ramsey Theory, Unary Transformations, and Webern's Op. 5, No. 4."
Integral 13 (1999): 63-93.
Clifton, Thomas. The Poetics of Musical Silence. The Musical Quarterly 62: no. 2
(April. 1976): 163-181.
Coffman, Don. Measuring Musical Originality Using Information Theory. Psychology
of Music 20 (1992): 154-161.
Cone, Edward. Stravinsky: The Progress of a Method. Perspectives of New Music 1:
no. 1 (Autumn 1962): 18-26.
Drew, James. Information, Space, and a New Time-Dialectic. Journal of Music Theory
12: no. 1 (Spring 1968): 86-103.
Eddington, A. The Nature of the Physical World. Ann Arbor: University of Michigan
Press, 1935.
Escot, Pozzi. Towards a Theoretical Concept: Non-linearity in Weberns op. 11, no. 1.
Sonus 3: no. 1 (Fall 1982): 18-29.
Forte, Allen. Aspects of Rhythm in Weberns Atonal Music. Music Theory Spectrum 2
(Spring 1980): 90-109.
-----------. The Atonal Music of Anton Webern. Yale University Press, 1998.
Franchisena, Cesar. El tiempo en la composicion actual, Revista del Instituto Superior
de Musica 3 (Nov. 1993): 109-135.
Fraser, J. T. "The Art of the Audible Now." Music Theory Spectrum 7 (Spring 1985): 181-184.
Hasty, Christopher. On the Problem of Succession and Continuity in Twentieth-Century
Music. Music Theory Spectrum 8 (Spring 1986): 58-74.
-----------. Rhythm in Post-Tonal Music: Preliminary Questions of Duration and
Motion. Journal of Music Theory 25: no. 2 (Autumn 1981): 183-216.
Hatten, Robert. The Troping of Temporality in Music. Approaches to Meaning in
Music, ed. Byron Almen and Edward Pearsall. Bloomington: Indiana University Press,
2006.
Hawes, Vanessa. Number Fetishism: The History of the Use of Information Theory as a
Tool for Musical Analysis. Music's Intellectual History 2009, ed. Zdravko Blazekovic
and Barbara Dobbs Mackenzie. New York: RILM, 2009.
Hellmuth Margulis, Elizabeth and Andrew Beatty. "Musical Style, Psychoaesthetics, and
Prospects for Entropy as an Analytic Tool." Computer Music Journal 32: no. 4 (Winter
2008): 64-78.
Hessert, Norman. The Use of Information Theory in Musical Analysis. Ph.D diss.,
Indiana University, 1971.
Hiller, Lejaren and Calvert Bean. Information Theory Analyses of Four Sonata
Expositions. Journal of Music Theory 10: no. 1 (1966): 96-137.
Hiller, Lejaren and Ramon Fuller. Structure and Information in Webern's Symphonie,
op. 21. Journal of Music Theory 11: no. 1 (Spring 1967): 60-115.
Johnson, Julian. Webern and the Transformation of Nature. Cambridge: Cambridge
University Press, 1999.
Khinchin, Aleksandr Ikolevich. Mathematical Foundations of Information Theory.
Mineola, New York: Dover, 1957.
Knopoff, Leon and William Hutchinson. Information Theory for Musical Continua.
Journal of Music Theory 25: no. 1 (Spring 1981): 17-44.
-----------. On the Entropy of Music: The Influence of Sample Length. Journal of Music
Theory 27: no. 1 (Spring 1983): 75-97.
Kraehenbuehl, David and Edgar Coons. Information as a Measure of the Experience of
Music. Journal of Aesthetics and Art Criticism 17: no. 4 (June 1959): 510-522.
Kramer, Jonathan. Moment Form in Twentieth-Century Music. The Musical Quarterly
64: no. 2 (April 1978): 177-194.
-----------. Multiple and Non-Linear Time in Beethovens Op. 135. Perspectives of New
Music 11: no. 2 (Spring/Summer 1973): 122-145.
-----------. "New Temporalities in Music." Critical Inquiry 7: no. 3 (Spring 1981): 539-556.
-----------. The Time of Music. New York: Schirmer, 1988.
Levinson, Jerrold. Music in the Moment. Ithaca and London: Cornell University Press,
1997.
Lewin, David. Generalized Musical Intervals and Transformations. New Haven and
London: Yale University Press, 1987.
-----------. "A Metrical Problem in Webern's Op. 27." Music Analysis 12: no. 3 (1993):
343-354.
-----------. "Music Theory, Phenomenology, and Modes of Perception." In Studies in
Music with Text. Oxford: Oxford University Press, 2006: 53-108.
-----------. Some Applications of Communication Theory to the Study of Twelve-Tone
Music. Journal of Music Theory 12: no. 1 (Spring 1968): 50-84.
-----------. "Some New Constructs Involving Abstract PCSets, and Probabilistic
Applications." Perspectives of New Music 18: no. 1/2 (Autumn 1979-Summer 1980):
433-444.
-----------. "Thoughts on Klumpenhouwer Networks and Perle-Lansky Cycles." Music
Theory Spectrum 24: no. 2 (Autumn 2002): 196-230.
Ligeti, György. "Metamorphoses of Musical Form." Die Reihe 7 (1965): 5-19.
Lippman, Edward. Progressive Temporality in Music. The Journal of Musicology 3: no.
2 (Spring 1984): 121-141.
Lissa, Zofia. The Temporal Nature of a Musical Work. Journal of Aesthetics and Art
Criticism 26: no. 4 (Summer 1968): 529-538.
Majernik, Vladimir. The Determination of the Predictional Entropy in Music.
Musicologica Slovaca 6 (1978): 46-50.
Manzara, Leonard, Ian Witten, and Mark James. "On the Entropy of Music: An
Experiment with Bach Chorale Melodies." Leonardo Music Journal 2: no. 1 (1992): 81-88.
Meyer, Leonard. Explaining Music: Essays and Explorations. Berkeley: University of
California Press, 1973.
-----------. Meaning in Music and Information Theory. Journal of Aesthetics and Art
Criticism 15: no. 4 (June 1957): 412-424.
Moles, Abraham. Information Theory and Aesthetic Perception. Champaign, IL:
University of Illinois Press, 1956.
Nattiez, Jean-Jacques. The Battle of Chronos and Orpheus. Oxford: Oxford University
Press, 1993.
-----------. Music and Discourse. Princeton, NJ: Princeton University Press, 1990.
Nauta, Doede. The Meaning of Information. The Hague: Mouton, 1972.
Perle, George. Serial Composition and Atonality. Berkeley and Los Angeles: University
of California Press, 1981.
Rhodes, James. Musical Data as Information: A General-Systems Perspective on
Musical Analysis. Computing in Musicology 10 (1995-1996): 163-180.
Rives Jones, James. "Some Aspects of Rhythm and Meter in Webern's Op. 27."
Perspectives of New Music 7: no. 1 (Autumn/Winter 1968): 103-109.
Rochberg, George. The Aesthetics of Survival: A Composer's View of Twentieth-Century
Music. Ann Arbor: University of Michigan Press, 2004.
Roeder, John. A Calculus of Accent. Journal of Music Theory 39: no. 1 (Spring 1995):
1-46.
Rosenfield, Lawrence. Aristotle and Information Theory. Paris, the Hague: Mouton,
1971.
Schenker, Heinrich. Harmony. Translated by Oswald Jones. Chicago: University of
Chicago Press, 1954.
Shannon, Claude. "A Mathematical Theory of Communication." Bell System Technical
Journal 27 (1948): 379-423.
---------- and Warren Weaver. The Mathematical Theory of Communication. Urbana:
University of Illinois Press, 1949.
Shelley, Percy. A Defence of Poetry and Other Essays. 1840; Project Gutenberg, 2005.
http://www.gutenberg.org/etext/5428.
Sherburne, Donald. Meaning and Music. Journal of Aesthetics and Art Criticism 24:
no. 4 (Summer 1966): 579-583.
Shreffler, Anne. "'Mein Weg geht jetzt vorueber': The Vocal Origins of Webern's Twelve-Tone
Composition." Journal of the American Musicological Society 47: no. 2 (Summer
1994): 275-339.
Snyder, John. "Entropy as a Measure of Musical Style: The Influence of A Priori
Assumptions." Music Theory Spectrum 12: no. 1 (Spring 1990): 121-160.
Sonntag, Brunhilde. "Adornos Webern-Kritik." In Musik im Diskurs Band 2: Adorno in
seinen musikalischen Schriften. Regensburg: Gustav Bosse Verlag, 1987: 139-161.
Stockhausen, Karlheinz. Structure and Experiential Time, Die Reihe 2, trans. Eric
Smith. Bryn Mawr, PA: Presser, 1959: 64-74.
Türcke, Berthold. "'Ein Stehenbleiben, das in die Welte geht': Der Gestus der Zeit in
Weberns Spätwerk." Musik-Konzepte (November 1984): 8-25.
Underwood, James. Time and Activity in Webern's Opus 10. Indiana Theory Review 3:
no. 2 (Winter 1980): 31-38.
Weaver, Warren. Mathematics of Communication. Scientific American 181:1 (1949):
11-15.
Youngblood, Joseph. "Style as Information." Journal of Music Theory 2: no. 1 (Apr.
1958): 24-35.
Yuasa, Joji. "Temporality and I: From the Composer's Workshop." Perspectives of New
Music 31: no. 2 (Summer 1993): 216-228.
